# Implementation of vector quantization technique in image compression and speech recognition

- Abstract
- Introduction
- Codebook design
- LBG algorithm
- Mathematical analysis of LBG algorithm

- Splitting codebook design
- Experiment and output
- Vector quantization in speech recognition
- Algorithm
- Pre-processing
- Mel frequency warping (MFCC)
- Principle of MFCC
- Steps used in MFCC

- Vector quantization
- Initialization
- Vector coding
- Codebook updating
- Quantization error calculation

- Results
- Conclusion
- References

In this paper, we describe applications of Vector Quantization (VQ) to image and speech processing. Compressing image data with VQ [1]-[3] compares each training vector against the codebook; the result is the index of the codevector with minimum distortion. There are three basic ways of generating the codebook: (1) the random method, (2) pairwise nearest-neighbor clustering, and (3) splitting. Using a random codebook degrades image quality. This paper presents the splitting method of codebook design, which improves image quality: the codebook is initialized with the average of the training vectors, and that average is then split into codevectors with minimum distortion. The results show better image quality than a random codebook provides.

[...] The result is that scattered data are reduced better than with a random sample (a random codebook).

Vector quantization in image compression

In image compression using Vector Quantization, an input image is divided into small blocks called training vectors. These training vectors can be closely reconstructed by applying a transfer function, drawn from a codebook, to a specific region of the input image itself. Thus, only the set of transfer functions, which contains fewer data than the image, is required to reconstruct the input image. [...]
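The encoding step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function name `encode_blocks` and the toy image and codebook are our own, and distortion is measured as squared Euclidean distance.

```python
import numpy as np

def encode_blocks(image, codebook, block=4):
    """Quantize non-overlapping block x block patches of a grayscale image.

    For each patch, return the index of the codebook vector with
    minimum distortion (squared Euclidean distance)."""
    h, w = image.shape
    patches = (image[:h - h % block, :w - w % block]
               .reshape(h // block, block, w // block, block)
               .swapaxes(1, 2)               # group rows/cols of blocks
               .reshape(-1, block * block))  # one flattened vector per block
    # Squared distance from every patch to every codevector.
    d = ((patches[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

# Toy example: a 4x4 "image" and a 2-entry codebook of 2x2 blocks.
img = np.array([[0, 0, 10, 10],
                [0, 0, 10, 10],
                [10, 10, 0, 0],
                [10, 10, 0, 0]], dtype=float)
cb = np.array([[0, 0, 0, 0], [10, 10, 10, 10]], dtype=float)
print(encode_blocks(img, cb, block=2))  # -> [0 1 1 0]
```

Only these indices need to be stored or transmitted; the decoder simply looks each index up in the shared codebook, which is why VQ compresses.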

[...] The areas of the diagram that would represent abrupt intensity changes from one pixel to the next are sparsely populated. Split the initial codevector x̂₁ (the average of the training vectors) into two vectors, x̂₁+ε and x̂₁−ε, where ε is a small perturbation, as shown in Figure 2.

FIGURE 2. Distribution of pairs of adjacent pixels from grayscale Lena.

FIGURE 4. Vector quantization to 4 bits per 2D-vector.

Figure 4 shows how things look with VQ. As in the previous figure, the codebook vectors are represented as big red dots, and the red lines delimit their zones of influence. [...]
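The splitting codebook design can be sketched as follows. This is a hedged sketch of the standard LBG procedure, not the paper's exact code: the function name `lbg` is ours, the number of codevectors is assumed to be a power of two, and each split is refined with nearest-neighbor / centroid (Lloyd) iterations until the distortion stops improving.

```python
import numpy as np

def lbg(training, n_codevectors, eps=0.01, tol=1e-6):
    """Splitting (LBG) codebook design sketch.

    Start from the average of all training vectors, repeatedly split each
    codevector y into y*(1+eps) and y*(1-eps), then refine the doubled
    codebook with Lloyd iterations until distortion stops improving."""
    codebook = training.mean(axis=0, keepdims=True)  # initial average vector
    while len(codebook) < n_codevectors:
        # Split every codevector into a perturbed pair.
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        prev = np.inf
        while True:
            # Nearest-neighbor assignment of training vectors to codevectors.
            d = ((training[:, None] - codebook[None]) ** 2).sum(-1)
            idx = d.argmin(1)
            dist = d.min(1).mean()
            if prev - dist < tol:
                break
            prev = dist
            # Centroid update; leave empty cells unchanged.
            for j in range(len(codebook)):
                if (idx == j).any():
                    codebook[j] = training[idx == j].mean(0)
    return codebook
```

On two well-separated clusters, e.g. points near (0, 0.5) and (10, 10.5), the two resulting codevectors settle on the cluster centroids.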

[...] The summation represents the total quantization error of the codebook.

Results

When input speech is given to the microphone, and it is recognized correctly and matched with the codebook coefficients, the results shown in the following figures are generated.

Figure 8. Flowchart of the VQ-LBG algorithm.

Vector quantization gains its name from the fact that it is a quantization method that deals with vectors rather than individual samples or scalars. A training pattern is formed by concatenating the MFCCs extracted from the available training samples. [...]
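The total quantization error used for matching can be sketched as below. This is an illustrative sketch, assuming the MFCC vectors have already been extracted; the function name `quantization_error` and the toy feature values are our own, and distortion is again squared Euclidean distance. At recognition time, the speaker whose codebook yields the smallest total error is selected.

```python
import numpy as np

def quantization_error(features, codebook):
    """Total VQ distortion: the sum, over all feature vectors, of the
    squared Euclidean distance to the nearest codebook entry."""
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).sum()

# Hypothetical example: match an utterance against two speaker codebooks.
utterance = np.array([[1.0, 1.0], [1.2, 0.9]])   # stand-in MFCC frames
speaker_a = np.array([[1.0, 1.0], [5.0, 5.0]])   # codebook of speaker A
speaker_b = np.array([[4.0, 4.0], [6.0, 6.0]])   # codebook of speaker B
best = min((speaker_a, speaker_b),
           key=lambda cb: quantization_error(utterance, cb))
```

Here the utterance lies close to speaker A's codevectors, so A's codebook gives the smaller total error and is chosen.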