What Are the Lossless Compression Algorithms Commonly Used in Computers?

This article introduces the lossless compression algorithms commonly used in computers, along with several compression approaches based on deep learning. It is intended as a practical reference; we hope you find it useful.
The commonly used lossless compression algorithms are: 1. LZ77, the basis of many other lossless compression algorithms; 2. LZR, an algorithm designed to improve LZ77 into a linear-time method; 3. LZSS, an improvement on LZ77 that checks whether each substitution actually shrinks the output; 4. DEFLATE; 5. LZMA; and others.
Data compression is the process of reducing file size while retaining the same data, or most of it. It works by eliminating redundant data or by reorganizing data into a more efficient format. Compression can be lossy or lossless: a lossy method permanently discards some data, while a lossless method preserves all of it. Which one to use depends on how faithful you need the reconstructed file to be.
This article presents six lossless data compression algorithms and four image/video compression approaches based on deep learning.
Six Lossless Data Compression Algorithms
Lossless compression algorithms are usually used for archiving and other high-fidelity purposes. They reduce file size while guaranteeing that files can be fully restored. There are many lossless algorithms to choose from; here are six common ones:
1. LZ77
The LZ77 algorithm was published in 1977. As the basis of many other lossless compression algorithms, it uses the concept of a "sliding window" over the data, maintaining a dictionary of recently seen phrases. Each dictionary entry is a triple:

Offset: the distance from the current position back to the start of the matched phrase
Run length: the number of characters that make up the phrase
Next character: the first symbol in the look-ahead buffer after the match ends, marking the start of a new phrase
As the file is parsed, the dictionary is updated on the fly to reflect the most recent data. For example, in the string "abbadabba", the repeated phrase "abb" is compressed into a dictionary reference. The tokenization proceeds as follows:

Input consumed    Longest earlier match    Output (offset, length, next)
a                 none                     (0, 0, a)
b                 none                     (0, 0, b)
b, a              "b", 1 back              (1, 1, a)
d                 none                     (0, 0, d)
a, b, b, a        "abb", 5 back            (5, 3, a)

In this small example the compressed data is not much smaller than the original, but on long files with plenty of repetition the savings become substantial.
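As an illustration, here is a minimal Python sketch of the scheme. It is deliberately naive (a linear scan over a bounded window); real implementations use hash chains or similar structures to find matches quickly, and the window size of 255 is an arbitrary choice for the example.

```python
# Naive LZ77: slide a window over the input and emit
# (offset, length, next_char) triples.
def lz77_compress(data: str, window: int = 255):
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        # Search the window for the longest match with the look-ahead.
        for j in range(max(0, i - window), i):
            length = 0
            while (i + length < len(data) - 1      # keep one char for next_char
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz77_decompress(tokens):
    text = []
    for off, length, ch in tokens:
        for _ in range(length):
            text.append(text[-off])    # copy from `off` positions back
        text.append(ch)
    return "".join(text)

tokens = lz77_compress("abbadabba")
print(tokens)   # [(0, 0, 'a'), (0, 0, 'b'), (1, 1, 'a'), (0, 0, 'd'), (5, 3, 'a')]
assert lz77_decompress(tokens) == "abbadabba"
```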
2. LZR
LZR was proposed by Michael Rodeh and his co-authors in 1981 as a development of LZ77. The goal of the algorithm was to be a linear-time replacement for LZ77; however, its encoded pointers may point to any offset in the file, which consumes a great deal of memory, so in practice it does not perform as well as LZ77.
3. LZSS
LZSS, short for Lempel-Ziv-Storer-Szymanski, was proposed in 1982, also as an improvement on LZ77. It introduces a check on whether a substitution actually reduces the file size: if replacing a phrase with a reference would not save space, the original input is kept as a literal. LZSS also drops the trailing "next character", emitting only (offset, length) pairs, each preceded by a flag bit that distinguishes pairs from literals. This compression algorithm is widely used in archive formats such as RAR and in network data compression.
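The decision rule is easy to sketch. The fragment below is illustrative only (the minimum match length of 3 and the tuple encoding are assumptions for readability; a real LZSS stream packs flag bits and fixed-width fields) and reuses the naive match search from the LZ77 example:

```python
# LZSS-style encoding: emit a back-reference only when it pays off,
# otherwise fall back to a plain literal.
MIN_MATCH = 3   # below this, an (offset, length) pair costs more than it saves

def lzss_compress(data: str, window: int = 255):
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):
            length = 0
            while i + length < len(data) and data[j + length] == data[i + length]:
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        if best_len >= MIN_MATCH:
            out.append(("ref", best_off, best_len))   # flag bit 1 in a real stream
            i += best_len
        else:
            out.append(("lit", data[i]))              # flag bit 0 in a real stream
            i += 1
    return out

print(lzss_compress("abbadabbaabbadabba"))
# five literals followed by two back-references for the repeats
```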
4. DEFLATE
The DEFLATE algorithm was proposed in 1993 by Phil Katz. It combines an LZ77/LZSS-style preprocessing stage with Huffman coding. Huffman coding, proposed in 1952, is a form of entropy coding that assigns shorter bit codes to more frequent characters.
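DEFLATE is what Python's built-in zlib module implements, so a round trip can be shown with no third-party code (the sample text is arbitrary):

```python
import zlib

raw = b"abbadabba " * 100
packed = zlib.compress(raw, level=9)   # LZ77-style matching + Huffman coding
print(len(raw), "->", len(packed))     # the repetitive input shrinks dramatically
assert zlib.decompress(packed) == raw
```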
5. LZMA
LZMA, the Lempel-Ziv Markov chain Algorithm, was proposed in 1998. It is an improved version of LZ77, designed for the 7-Zip archiver's .7z format. It uses a chained compression scheme, applying a modified LZ77 at the bit rather than the byte level; the output of that stage is then further compressed with an arithmetic (range) coder. Depending on the implementation, other compression steps may be introduced.
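Python's built-in lzma module exposes the original LZMA container (the legacy "alone" format used by .lzma files), so the algorithm can be exercised directly:

```python
import lzma

raw = b"abbadabba " * 100
packed = lzma.compress(raw, format=lzma.FORMAT_ALONE)  # legacy LZMA1 stream
print(len(raw), "->", len(packed))
assert lzma.decompress(packed) == raw
```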
6. LZMA2
The LZMA2 algorithm, proposed in 2009, is an improved version of LZMA. It improves on LZMA's multithreading capabilities and handles incompressible data more efficiently.
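LZMA2 is the filter used inside the .xz container, and Python's lzma module lets you request it explicitly through a filter chain:

```python
import lzma

raw = b"abbadabba " * 100
filters = [{"id": lzma.FILTER_LZMA2, "preset": 9}]
packed = lzma.compress(raw, format=lzma.FORMAT_XZ, filters=filters)
print(len(raw), "->", len(packed))
assert lzma.decompress(packed) == raw
```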
Four Image/Video Compression Algorithms Based on Deep Learning
In addition to the classic algorithms described above, there are compression algorithms based on deep learning to choose from.
1. Compression Algorithms Based on the Multilayer Perceptron (MLP)
Multilayer perceptron (MLP) techniques use multiple layers of neurons to take in, process, and output data, and can be applied to dimensionality reduction and data compression. The first MLP-based compression algorithm was proposed in 1988 and incorporated:

Binary coding: standard two-symbol encoding
Quantization: restricting continuous-valued input to a discrete set
Domain-specific transformations: pixel-level data transforms
The MLP algorithm uses the outputs of the final layer of a decomposition neural network to determine the best combination of binary codes. This approach was later refined with prediction techniques, which estimate each value from adjacent data and improve accuracy through backpropagation.
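A minimal sketch of the general idea, written as a modern PyTorch autoencoder (the layer sizes, the 16-level quantization, and the straight-through gradient trick are illustrative choices, not the 1988 algorithm itself):

```python
import torch
import torch.nn as nn

class MLPCodec(nn.Module):
    """Encode a flattened pixel block into a small quantized code."""
    def __init__(self, block_dim: int = 64, code_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(block_dim, 32), nn.ReLU(),
            nn.Linear(32, code_dim), nn.Sigmoid(),   # keep codes in [0, 1]
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 32), nn.ReLU(),
            nn.Linear(32, block_dim),
        )

    def forward(self, x):
        code = self.encoder(x)
        # Quantization: round to 16 levels in the forward pass while
        # letting gradients flow straight through for training.
        q = code + (torch.round(code * 15) / 15 - code).detach()
        return self.decoder(q)

model = MLPCodec()
block = torch.rand(1, 64)                      # one flattened 8x8 pixel block
loss = nn.functional.mse_loss(model(block), block)
loss.backward()                                # optimized by backpropagation
```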
2. DeepCoder: Deep Neural Network-Based Video Compression
DeepCoder is a framework based on convolutional neural networks (CNN), offered as an alternative to traditional video compression techniques. The model uses separate CNNs for the prediction signal and the residual signal. It maps the encoded features to a binary stream using scalar quantization and a traditional entropy coder, Huffman coding. The model's performance is generally considered competitive with the well-known H.264/AVC video coding standard.
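The overall shape of that split is easy to sketch. Everything below is an assumption for illustration (toy layer sizes, toy frame size, a made-up quantization scale), not the paper's exact architecture:

```python
import torch
import torch.nn as nn

def make_encoder() -> nn.Sequential:
    # Downsample a 1-channel frame into a small grid of feature maps.
    return nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        nn.Conv2d(16, 4, kernel_size=5, stride=2, padding=2),
    )

pred_enc, resid_enc = make_encoder(), make_encoder()

frame = torch.rand(1, 1, 64, 64)         # current frame (toy size)
prediction = torch.rand(1, 1, 64, 64)    # stand-in for a motion-compensated reference
residual = frame - prediction

# Scalar-quantize both feature maps; the integer symbols would then be
# entropy-coded (Huffman in DeepCoder) into the binary stream.
pred_sym = torch.round(pred_enc(prediction) * 8).to(torch.int32)
resid_sym = torch.round(resid_enc(residual) * 8).to(torch.int32)
print(pred_sym.shape, resid_sym.shape)   # two downsampled symbol grids
```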
3. Compression Algorithms Based on CNNs
CNNs are layered neural networks, typically used for image recognition and feature detection. Applied to compression, they use convolution operations to exploit the correlation between adjacent pixels. CNNs achieve better compression results than MLP-based algorithms, improving super-resolution performance and reducing artifacts. CNN-based compression also improves the quality of JPEG images by raising the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). With entropy estimation, CNN-based compression can match the performance of HEVC.
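PSNR, the metric cited above, is simple to compute; higher values mean the reconstruction is closer to the original. A small NumPy helper (the random test image and noise level are arbitrary):

```python
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in decibels."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")                 # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
noisy = np.clip(img + np.random.normal(0, 5, img.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(img, noisy):.2f} dB")   # roughly 34 dB for sigma-5 noise
```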
4. Compression Algorithms Based on Generative Adversarial Networks (GAN)
A GAN is a type of neural network in which two networks compete with each other to produce more accurate analyses and predictions. The first GAN-based compression algorithms were proposed in 2017; they achieved file compression ratios about 2.5 times those of common methods such as JPEG and WebP. GAN-based methods can also achieve real-time compression through parallel processing. The main principle is to compress a picture down to its most salient features; when decoding, the algorithm reconstructs the image from those features. Compared with CNN-based algorithms, GAN-based compression can produce higher-quality images by training the decoder against an adversarial loss.
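A minimal sketch of that adversarial setup, with all shapes and networks chosen purely for illustration: an encoder squeezes images into compact codes, a decoder reconstructs them, and a discriminator learns to tell reconstructions from real images while the decoder is trained both to minimize distortion and to fool the discriminator:

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(64 * 64, 32))                       # to compact code
decoder = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 64 * 64))
discriminator = nn.Sequential(nn.Linear(64 * 64, 128), nn.ReLU(), nn.Linear(128, 1))
bce = nn.BCEWithLogitsLoss()

real = torch.rand(8, 64 * 64)   # batch of real images, flattened
fake = decoder(encoder(real))   # reconstructions from the compressed codes

# Discriminator objective: label real images 1, reconstructions 0.
d_loss = (bce(discriminator(real), torch.ones(8, 1))
          + bce(discriminator(fake.detach()), torch.zeros(8, 1)))

# Decoder objective: distortion plus an adversarial term that rewards
# reconstructions the discriminator accepts as real.
g_loss = (nn.functional.mse_loss(fake, real)
          + 0.01 * bce(discriminator(fake), torch.ones(8, 1)))
print(float(d_loss), float(g_loss))
```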