Sunday, July 12, 2009

QUANTIZATION AND R-D FUNCTION

Data Compression: http://www.data-compression.com/theory.shtml

Quantization:
A quantizer has two important properties: 1) the distortion resulting from the approximation and 2) the bit rate resulting from binary encoding of its levels. The quantizer design problem is therefore a rate-distortion optimization problem. (http://en.wikipedia.org/wiki/Quantization_(signal_processing))
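A quick toy example of that trade-off (my own sketch, not taken from the links above): a uniform scalar quantizer, where more levels means lower distortion but a higher bit rate. The function name `uniform_quantize` is just illustrative.

```python
import numpy as np

def uniform_quantize(x, num_levels, lo=-1.0, hi=1.0):
    """Uniform scalar quantizer on [lo, hi] with num_levels levels."""
    step = (hi - lo) / num_levels
    # Map each sample to the midpoint of its quantization cell.
    idx = np.clip(np.floor((x - lo) / step), 0, num_levels - 1)
    return lo + (idx + 0.5) * step

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 100_000)

for levels in (2, 4, 8, 16):
    xq = uniform_quantize(x, levels)
    mse = np.mean((x - xq) ** 2)   # distortion
    rate = np.log2(levels)         # bits per sample (fixed-length coding)
    print(f"levels={levels:2d}  rate={rate:.1f} bits  MSE={mse:.5f}")
```

Doubling the number of levels adds one bit per sample and cuts the mean squared error by roughly a factor of four, which is the basic rate-distortion trade-off in action.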
A good tutorial on Vector Quantization: http://www.data-compression.com/vq.html. In 1980, Linde, Buzo, and Gray (LBG) proposed a VQ design algorithm based on a training sequence.
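A minimal sketch of the LBG idea, assuming a plain k-means-style iteration on a training sequence (the full algorithm in the tutorial also uses codebook splitting for initialization, which I skip here; `lbg_design` is my own name for the routine):

```python
import numpy as np

def lbg_design(training, codebook_size, iters=50, seed=0):
    """Design a VQ codebook from a training sequence by alternating
    nearest-neighbor assignment and centroid update (LBG/k-means style)."""
    rng = np.random.default_rng(seed)
    # Initialize the codebook with randomly chosen training vectors.
    codebook = training[rng.choice(len(training), codebook_size, replace=False)]
    for _ in range(iters):
        # Nearest-neighbor condition: assign each vector to its closest codeword.
        d = np.linalg.norm(training[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Centroid condition: each codeword becomes the mean of its cell.
        for k in range(codebook_size):
            cell = training[labels == k]
            if len(cell) > 0:
                codebook[k] = cell.mean(axis=0)
    return codebook, labels

# Toy training sequence: 2-D Gaussian vectors.
rng = np.random.default_rng(1)
training = rng.normal(size=(5000, 2))
codebook, labels = lbg_design(training, codebook_size=16)
mse = np.mean(np.sum((training - codebook[labels]) ** 2, axis=1))
print(f"rate = {np.log2(len(codebook)):.0f} bits/vector, distortion (MSE) = {mse:.4f}")
```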
Some VQers:
  • Stanley Ahalt
  • Jim Fowler
  • Allen Gersho
  • Robert M. Gray
  • Batuhan Ulug

Rate-Distortion:
A good tutorial by Bernd Girod: http://www.stanford.edu/class/ee368b/Handouts/04-RateDistortionTheory.pdf
"Mutual Information" I(U;V) is the information that symbol U and symbol V convey about each other. Equivalently, I(U;V) is the communicated amount of information.
"Channel Capacity" C is the maximum mutual information between the transmitter and the receiver.
It is known that the Gaussian source is the most "difficult" source to encode: for a given mean square error, it requires the greatest number of bits. The rate achieved by a practical compression system working on, say, images may well lie below the Gaussian R(D) bound, since real sources are usually easier to encode than a Gaussian source of the same variance. ( http://en.wikipedia.org/wiki/Rate%E2%80%93distortion_theory )
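The Gaussian case also has a well-known closed form (memoryless Gaussian source with variance σ², mean-squared-error distortion):

```latex
R(D) =
\begin{cases}
\dfrac{1}{2}\log_2 \dfrac{\sigma^2}{D}, & 0 \le D \le \sigma^2 \\[4pt]
0, & D > \sigma^2
\end{cases}
```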
Wyner and Ziv's paper "The Rate-Distortion Function for Source Coding with Side Information at the Decoder" provides the R-D function for lossy distributed source coding (DSC), along with its derivation. The proof is pretty involved; I will read it when necessary.
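If I remember the statement correctly, the Wyner-Ziv rate-distortion function has the form below (source X, side information Y available only at the decoder, auxiliary random variable U):

```latex
R_{WZ}(D) \;=\; \min_{p(u \mid x),\, g} \bigl[\, I(X;U) - I(Y;U) \,\bigr]
          \;=\; \min_{p(u \mid x),\, g} I(X;U \mid Y),
```

where the minimization is over conditionals p(u|x) forming the Markov chain U → X → Y and decoder functions g(U, Y) with E[d(X, g(U, Y))] ≤ D.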
