Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:

Not strictly a C programming question, but smart people will see the
relevance to the topic, which is portability.

Is there a compression algorithm which converts human-language ASCII text
to compressed ASCII, preferably using only "isgraph" characters? So

"Mary had a little lamb, its fleece was white as snow"

would become something like

QWE£$543GtT£$"||x|VVBB?
On 06/06/2024 17:56, Ben Bacarisse wrote:

I must not be smart, as I can't see any connection to the topic of this
group!

Obviously such algorithms exist. One that is used a lot is just base64
encoding of binary compressed text, but that won't beat something
specifically crafted for the task, which is presumably what you are
asking for. I don't know of anything aimed at that task specifically.

One thing you should specify is whether you need it to work on small
texts or, even better, at what sort of size you want the pay-off to
start to kick in. For example, the xz+base64 encoding of the complete
works of Shakespeare is still less than 40% of the size of the original,
but your single line will end up much larger using that off-the-shelf
scheme.
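[Editorial aside: the compress-then-encode idea Ben describes is easy to sketch. The example below uses zlib's compress() rather than xz (an assumption of convenience; xz would compress better, as Ben notes), and the base64_encode helper is invented for the illustration.]

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

/* Hypothetical helper: base64-encode len bytes of buf into a malloc'd,
   NUL-terminated string made only of isgraph characters. */
static char *base64_encode(const unsigned char *buf, size_t len)
{
    static const char tab[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    char *out = malloc(4 * ((len + 2) / 3) + 1);
    size_t i, j = 0;

    if (!out)
        return NULL;
    for (i = 0; i + 2 < len; i += 3) {
        unsigned v = (buf[i] << 16) | (buf[i + 1] << 8) | buf[i + 2];
        out[j++] = tab[(v >> 18) & 63];
        out[j++] = tab[(v >> 12) & 63];
        out[j++] = tab[(v >> 6) & 63];
        out[j++] = tab[v & 63];
    }
    if (i < len) {                          /* one or two bytes left over */
        unsigned v = buf[i] << 16;
        if (i + 1 < len)
            v |= buf[i + 1] << 8;
        out[j++] = tab[(v >> 18) & 63];
        out[j++] = tab[(v >> 12) & 63];
        out[j++] = (i + 1 < len) ? tab[(v >> 6) & 63] : '=';
        out[j++] = '=';
    }
    out[j] = '\0';
    return out;
}

int main(void)
{
    const char *text = "Mary had a little lamb, its fleece was white as snow";
    uLong srclen = (uLong)strlen(text);
    uLongf packedlen = compressBound(srclen);
    unsigned char *packed = malloc(packedlen);
    char *ascii;

    if (!packed || compress(packed, &packedlen,
                            (const Bytef *)text, srclen) != Z_OK)
        return 1;
    ascii = base64_encode(packed, packedlen);
    if (!ascii)
        return 1;
    printf("%lu bytes in, %lu compressed, %lu as printable ASCII:\n%s\n",
           (unsigned long)srclen, (unsigned long)packedlen,
           (unsigned long)strlen(ascii), ascii);
    free(ascii);
    free(packed);
    return 0;
}
```

Built with something like cc file.c -lz, the printable output for this one-line input comes out longer than the original, which is exactly the pay-off problem Ben raises: the off-the-shelf scheme only starts winning on larger texts.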
On 2024-06-06 19:09:03 +0000, Malcolm McLean said:

What I was thinking of was using Huffman codes to convert ASCII to a
string of bits.

Because the lines will often be very short, adaptive Huffman coding is
no good. I need a fixed Huffman table with 128 entries, one for each
7-bit value, plus one for "stop". I wonder if any such standard table
exists.
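[Editorial aside: Mikko's reply below makes the point that what is needed is statistics rather than a standard table. A minimal sketch of building such a fixed table (128 symbols plus "stop") might look like the following; the frequency counts here are invented for illustration, and real ones would be gathered from a representative body of text.]

```c
#include <stdio.h>

#define NSYM  129               /* the 128 seven-bit codes plus one "stop" */
#define NNODE (2 * NSYM - 1)

/* Huffman tree node: indices 0..NSYM-1 are leaves, the rest are internal. */
struct node {
    unsigned long weight;
    int left, right;            /* -1 for leaves */
    int parent;                 /* -1 until the node is merged */
};

/* Build a Huffman tree from fixed per-symbol counts.  The counts are the
   "statistics" the thread talks about: whatever numbers are chosen must be
   baked into both the compressor and the decompressor. */
static void build_tree(const unsigned long freq[NSYM], struct node tree[NNODE])
{
    int i, n;

    for (i = 0; i < NSYM; i++) {
        tree[i].weight = freq[i] ? freq[i] : 1;   /* avoid zero weights */
        tree[i].left = tree[i].right = tree[i].parent = -1;
    }
    for (n = NSYM; n < NNODE; n++) {
        int a = -1, b = -1;
        /* find the two lightest nodes that have no parent yet (O(n^2)) */
        for (i = 0; i < n; i++) {
            if (tree[i].parent != -1)
                continue;
            if (a < 0 || tree[i].weight < tree[a].weight) {
                b = a;
                a = i;
            } else if (b < 0 || tree[i].weight < tree[b].weight) {
                b = i;
            }
        }
        tree[n].weight = tree[a].weight + tree[b].weight;
        tree[n].left = a;
        tree[n].right = b;
        tree[n].parent = -1;
        tree[a].parent = n;
        tree[b].parent = n;
    }
}

/* Length of a symbol's code = number of steps from its leaf up to the root. */
static int code_length(const struct node tree[NNODE], int sym)
{
    int len = 0, i = sym;

    while (tree[i].parent != -1) {
        len++;
        i = tree[i].parent;
    }
    return len;
}

int main(void)
{
    /* Invented toy statistics, NOT a standard table: real counts would come
       from the kind of text you actually expect to compress. */
    unsigned long freq[NSYM];
    struct node tree[NNODE];
    int i;

    for (i = 0; i < NSYM; i++)
        freq[i] = 1;
    freq[' '] = 18; freq['e'] = 10; freq['t'] = 7; freq['a'] = 6;

    build_tree(freq, tree);
    printf("' ': %d bits, 'e': %d bits, 'q': %d bits, stop: %d bits\n",
           code_length(tree, ' '), code_length(tree, 'e'),
           code_length(tree, 'q'), code_length(tree, NSYM - 1));
    return 0;
}
```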
On 07/06/2024 06:20, Mikko wrote:

That works if one knows, at the time one makes one's compression and
decompression algorithms, how often each short sequence of characters
will be used in the files that will be compressed. If you have adaptive
Huffman coding (or any other adaptive coding), a single error will
corrupt the rest of your line. If you reset the adaptation at the end of
each line, it does not adapt well and the result is not much better than
without adaptation. If you reset the adaptation at the end of each page,
you can have better compression, but an error corrupts the rest of the
page.

For ordinary texts (except short ones) and many other purposes,
Lempel-Ziv and its variants work better than Huffman.

You don't need a standard table. You need statistics. Once you have the ...

Malcolm McLean replied:

Yes, but Huffman is easy to decode. It's the sort of project you give to
people who have just got past the beginner stage but aren't very
experienced programmers yet, whilst implementing Lempel-Ziv is a job for
someone who knows what he is doing.
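[Editorial aside: Malcolm's point that fixed-table Huffman decoding is a beginner-sized job can be shown with a toy sketch. The tree below is hard-coded over just four symbols, not the 129 discussed above, and the packed bits were worked out by hand purely for this example.]

```c
#include <stdio.h>

/*
 * Decoding against a fixed Huffman tree is just "follow the bits".
 * Codes implied by the toy tree below:
 *   'a' = 0, ' ' = 10, 'b' = 110, stop = 111
 */
struct hnode {
    int symbol;                 /* 256 marks the stop symbol */
    int left, right;            /* child indices, -1 for leaves */
};

static const struct hnode tree[] = {
    { -1,   1,  2 },            /* 0: root                 */
    { 'a', -1, -1 },            /* 1: leaf 'a'   (code 0)  */
    { -1,   3,  4 },            /* 2: internal   (1...)    */
    { ' ', -1, -1 },            /* 3: leaf ' '   (code 10) */
    { -1,   5,  6 },            /* 4: internal   (11..)    */
    { 'b', -1, -1 },            /* 5: leaf 'b'   (code 110)*/
    { 256, -1, -1 },            /* 6: stop       (code 111)*/
};

int main(void)
{
    /* "ab a" then stop:  0 110 10 0 111, packed MSB first by hand */
    const unsigned char bits[] = { 0x69, 0xC0 };
    unsigned bitpos = 0;
    int node = 0;

    for (;;) {
        int bit = (bits[bitpos >> 3] >> (7 - (bitpos & 7))) & 1;
        bitpos++;
        node = bit ? tree[node].right : tree[node].left;
        if (tree[node].left == -1) {        /* reached a leaf */
            if (tree[node].symbol == 256)   /* stop symbol: done */
                break;
            putchar(tree[node].symbol);
            node = 0;                       /* restart at the root */
        }
    }
    putchar('\n');
    return 0;
}
```

Run as-is it prints "ab a"; the whole decoder is a dozen lines, which is the sense in which it is a project for someone just past the beginner stage.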