> Because all transmissions are secured by checksums and automatic query, no transmission errors are possible.
Just a reminder to people implementing systems with error detection: undetected errors are always possible. Checksums may be fine for this application, but if you need to hit a target error rate you may have to consider error-detecting or error-correcting codes matched to the kind of interference you find in your transmission channel.
[edit] Error-detecting checks work not by making errors impossible but by making them unlikely. Part of the design work is quantifying that probability to show it is low enough for your goal.
The German version is clearer: "automatic query" is translated more literally as "automatic callback", which I take to mean that the device requests retransmission when it detects an error.
Yes, if an error is detected there’s an automatic retry. I just want to remind others that checksums, like any error-detection scheme, admit undetected errors, and in that case no retry happens. The human in the loop will say: that was garbled, could you repeat?
Again this design may be totally fine for this application. I am bringing this up for other engineers because people tend to hand-wave this away.
If you hashed every 1000 symbols with a 512-bit hash to check message integrity and retransmit on mismatch, a hash collision would be practically impossible, so "no transmission errors are possible" is perfectly fair to say under some circumstances.
Is this what they are doing? Seems unlikely that you would transmit so much overhead when the data rate is so low, unless you needed to overcome a lot of noise.
It’s about the numbers, there’s trade-offs for each specific application. I would encourage people to do the math and see if their design makes sense for their goals.
There's virtually no benefit to dedicating so much of your message to checksums with any remotely efficient algorithm. More practically, any CRC >= 32 bits is probably overkill and any CRC > 64 bits is definitely overkill until you get up into gargantuan message sizes.
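To put rough numbers on that trade-off, here is a back-of-envelope sketch (the 1000-symbol message with 8-bit symbols is just an illustrative assumption, not from the thread): the chance that a random corruption slips past an n-bit check is roughly 2^-n, while the check itself consumes part of the frame.

```python
# Rough model: a random corruption passes an n-bit check with probability ~2**-n,
# and every check bit is overhead that displaces payload.
def undetected_fraction(check_bits):
    return 2.0 ** -check_bits

def overhead(payload_bits, check_bits):
    return check_bits / (payload_bits + check_bits)

payload = 1000 * 8  # hypothetical: 1000 symbols of 8 bits each

print(undetected_fraction(512))   # ~7.5e-155: far below any physical failure mode
print(overhead(payload, 512))     # ~6% of the frame spent on the hash
print(undetected_fraction(32))    # ~2.3e-10: already tiny for most applications
print(overhead(payload, 32))      # ~0.4% of the frame for a CRC-32
```

The point: going from 32 to 512 check bits buys improbability you can never observe, at a 15x cost in overhead.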
Agreed, for this application (texting) a simple approach is probably best. You have to look at the particulars of your domain, it’s not one size fits all.
If you're concerned about one or two bits being toggled, a 512-bit hash will merely increase the chance of bit errors; on a noisy medium, using such a big "checksum" will BOTH detect and cause a high number of packet faults.
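That effect is easy to quantify: every extra bit in the frame is another bit that can flip, so padding a frame with a huge hash raises the frame error rate on a noisy channel. A quick sketch, with an assumed (illustrative) raw bit error rate:

```python
# Probability that a frame of n bits contains at least one bit error,
# assuming independent bit errors at rate `ber`.
def frame_error_rate(ber, frame_bits):
    return 1 - (1 - ber) ** frame_bits

ber = 1e-4  # assumed raw bit error rate for a noisy channel (illustrative)

print(frame_error_rate(ber, 8000))        # ~0.55: payload alone
print(frame_error_rate(ber, 8000 + 512))  # ~0.57: payload + 512-bit hash, more retries
```

So the oversized "checksum" both detects corruption and, by lengthening the frame, makes corruption (and hence retransmission) more frequent.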
A piece of communications gadgetry even mentioning checksums is like a car salesman proudly claiming "... and this baby comes with working brakes!" The very pronouncement leads to worry, and additional questions (like this very thread, QED :-) ).
My inexpert digging came up with: They use APRS packets, which use AX.25, whose framing includes a 16-bit Frame Check Sequence, which looks like it came from HDLC, and is a 16-bit CRC-CCITT. Phew.
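For the curious, the HDLC-style FCS used by AX.25 is commonly catalogued as CRC-16/X-25: polynomial 0x1021, bit-reflected, with initial value and final XOR of 0xFFFF. A minimal bitwise sketch (not the author's code, just the textbook form of that CRC):

```python
def crc16_x25(data: bytes) -> int:
    # CRC-16/X-25, the HDLC frame check sequence used by AX.25:
    # reflected polynomial 0x1021 -> 0x8408, init 0xFFFF, final XOR 0xFFFF.
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0x8408
            else:
                crc >>= 1
    return crc ^ 0xFFFF

print(hex(crc16_x25(b"123456789")))  # 0x906e, the standard check value for this CRC
```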
True story: an embedded target that was loaded over the network (I think TFTP) would boot slowly and then crash when loaded from Workstation 1, but not Workstation 2.
Turns out there was a bad port on the Ethernet hub, but about 1 in 65,536 corrupted packets got through, because that's the fraction of random corruptions a 16-bit checksum misses.
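The arithmetic behind that anecdote (the packet count below is hypothetical, just to show the scale): with a flaky port generating a steady stream of bad packets, a 16-bit check lets a small but nonzero number through.

```python
# A 16-bit checksum passes a random corruption with probability 1/2**16.
p_pass = 1 / 2**16

corrupted_packets = 200_000  # hypothetical count of corrupted packets from the bad port
expected_bad_accepted = corrupted_packets * p_pass

print(expected_bad_accepted)  # ~3 corrupt packets accepted as valid
```

Three silently corrupted packets in an image download is plenty to make a target boot slowly and crash.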
I started to reintroduce them into my vocabulary. I found myself qualifying nearly everything, and it decreases the clarity of the discussion, feels lawyer-like. I'm not drafting a contract (most of the time), but trying to communicate a complicated principle in a few words, where every extra word detracts from the meaning.