Video codecs seek to represent a fundamentally analog data set in a digital format. Because analog video signals represent luma and color information separately, a common first step in codec design is to represent and store the image in a YCbCr color space.


The conversion to YCbCr provides two benefits: first, it improves compressibility by decorrelating the color signals; and second, it separates the luma signal, which is perceptually much more important, from the chroma signals, which are less perceptually important and can be represented at lower resolution to achieve more efficient data compression. It is common to express the ratio of information stored in these channels as Y:Cb:Cr; see the article on chroma subsampling for more information.
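
As a rough illustration, the sketch below (a hypothetical helper, not drawn from any particular codec; it assumes full-range values and the ITU-R BT.601 luma coefficients, whereas real codecs differ in range and coefficient choices) shows one way an RGB image can be converted to YCbCr:

    import numpy as np

    def rgb_to_ycbcr(rgb):
        """Convert an RGB image (H x W x 3, floats in [0, 1]) to
        full-range YCbCr using the ITU-R BT.601 luma coefficients."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b       # luma
        cb = 0.5 + (b - y) / (2.0 * (1.0 - 0.114))  # blue-difference chroma
        cr = 0.5 + (r - y) / (2.0 * (1.0 - 0.299))  # red-difference chroma
        return np.stack([y, cb, cr], axis=-1)

Because most of the image energy ends up in the Y plane, the Cb and Cr planes become good candidates for the subsampling described below.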

Different codecs use different chroma subsampling ratios as appropriate to their compression needs. Video compression schemes for Web and DVD use a 4:2:0 sampling pattern, and the DV standard uses 4:1:1 sampling. Professional video codecs designed to operate at much higher bitrates and to record more color information for post-production manipulation sample in 3:1:1 (uncommon), 4:2:2, and 4:4:4 ratios. Examples include Panasonic's DVCPRO50 and DVCPROHD codecs (4:2:2), Sony's HDCAM-SR (4:4:4), and Panasonic's HD D5 (4:2:2). Apple's ProRes 422 (HQ) codec also uses 4:2:2 sampling. Other codecs that sample in 4:4:4 patterns exist as well, but are less common and tend to be used internally in post-production houses. Video codecs can also operate in RGB space. These codecs tend not to sample the red, green, and blue channels in different ratios, since there is less perceptual motivation for doing so; at most, the blue channel could be subsampled, as human vision resolves blue detail least sharply.
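
For example, a simple (hypothetical) way to produce 4:2:0 data is to keep the luma plane at full resolution and average each 2x2 block of the chroma planes; production codecs use carefully specified filters and chroma siting conventions, but the idea is the same:

    import numpy as np

    def halve(plane):
        """Average each non-overlapping 2x2 block (assumes even dimensions)."""
        return (plane[0::2, 0::2] + plane[1::2, 0::2] +
                plane[0::2, 1::2] + plane[1::2, 1::2]) / 4.0

    def subsample_420(ycbcr):
        """4:2:0 sampling: full-resolution luma, half-resolution chroma."""
        y = ycbcr[..., 0]          # luma kept at full resolution
        cb = halve(ycbcr[..., 1])  # chroma halved horizontally
        cr = halve(ycbcr[..., 2])  # and vertically
        return y, cb, cr

Under this scheme the two chroma planes together carry only half as many samples as the luma plane, which is where much of the data-rate saving comes from.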

Some amount of spatial and temporal downsampling may also be used to reduce the raw data rate before the basic encoding process. The image data is then processed with a block-based frequency transform to decorrelate the spatial data; the most popular such transform is the 8x8 discrete cosine transform (DCT). Codecs which make use of a wavelet transform are also entering the market, especially in camera workflows which involve dealing with RAW image formats in motion sequences. The output of the transform is first quantized, then entropy encoding is applied to the quantized values. When a DCT has been used, the coefficients are typically scanned using a zig-zag scan order, and the entropy coding typically combines a number of consecutive zero-valued quantized coefficients with the value of the next non-zero quantized coefficient into a single symbol, and also has special ways of indicating when all of the remaining quantized coefficient values are equal to zero. The entropy coding method typically uses variable-length coding tables. Some encoders compress the video in a multi-pass process called n-pass encoding (e.g., 2-pass), which is slower but can produce better quality compression.
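
The following sketch puts these stages together for a single 8x8 block. It is a simplified illustration, not taken from any particular standard: the fixed quantizer step and the (zero_run, level) symbol format are placeholders for what a real codec would define precisely.

    import numpy as np

    def dct_matrix(n=8):
        """Orthonormal DCT-II basis matrix."""
        k = np.arange(n)
        c = np.sqrt(2.0 / n) * np.cos(
            np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        c[0, :] = np.sqrt(1.0 / n)
        return c

    def zigzag_order(n=8):
        """Flat indices of an n x n block in zig-zag (anti-diagonal) order."""
        return sorted(range(n * n),
                      key=lambda i: (i // n + i % n,
                                     i // n if (i // n + i % n) % 2 else i % n))

    C = dct_matrix()
    ZIGZAG = zigzag_order()

    def encode_block(block, qstep=16.0):
        """Transform, quantize, zig-zag scan, and run-length code one
        8x8 pixel block into (zero_run, level) symbols plus an
        end-of-block marker."""
        coeffs = C @ block @ C.T                   # 2-D DCT
        q = np.round(coeffs / qstep).astype(int)   # uniform quantization
        zz = q.flatten()[ZIGZAG]                   # low frequencies first
        symbols, run = [], 0
        for level in zz:
            if level == 0:
                run += 1                           # count consecutive zeros
            else:
                symbols.append((run, int(level)))  # zero run + next nonzero value
                run = 0
        symbols.append("EOB")                      # remaining coefficients are zero
        return symbols

In an actual codec, the (run, level) symbols would then be mapped to bit strings using the variable-length coding tables described above.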

The decoding process consists of performing, to the extent possible, an inversion of each stage of the encoding process. The one stage that cannot be exactly inverted is the quantization stage. There, a best-effort approximation of inversion is performed. This part of the process is often called "inverse quantization" or "dequantization", although quantization is an inherently non-invertible process.
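
The loss is easy to see in a one-line round trip; the step size of 16 is an arbitrary value chosen for illustration:

    qstep = 16
    coeff = 115.0
    index = round(coeff / qstep)  # forward quantization -> 7
    approx = index * qstep        # "dequantization"     -> 112
    error = coeff - approx        # 3.0, irrecoverably lost to rounding

Every coefficient in the range that rounds to the same index decodes to the same approximation, so the exact input value cannot be recovered.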

The encoding process also involves representing the video image as a set of macroblocks. For more information about this critical facet of video codec design, see B-frames.
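
A minimal sketch of that partitioning, assuming frame dimensions that are multiples of the macroblock size (real codecs pad or otherwise handle partial blocks at the edges):

    import numpy as np

    def macroblocks(luma, size=16):
        """Yield (row, col, block) for each size x size macroblock
        of a 2-D luma plane."""
        height, width = luma.shape
        for r in range(0, height, size):
            for c in range(0, width, size):
                yield r, c, luma[r:r + size, c:c + size]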

Video codec designs are usually standardized, or eventually become standardized; i.e., specified precisely in a published document. However, only the decoding process need be standardized to enable interoperability. The encoding process is typically not specified at all in a standard, and implementers are free to design their encoders however they want, as long as the video can be decoded in the specified manner. For this reason, the quality of the video produced by decoding the results of different encoders that use the same video codec standard can vary dramatically from one encoder implementation to another.