In contrast with product quantization, we focus on the performance for unstructured vector data. We introduce residual vector quantization, which is appropriate for unstructured data, for the vector encoding. An efficient exhaustive search method based on fast distance computation is proposed, and a non-exhaustive search method is proposed to improve efficiency for large-scale search. Our approaches are compared with two state-of-the-art methods, spectral hashing and product quantization, on both structured and unstructured datasets. The results show that our approaches obtain the best results in terms of accuracy and speed.

Our paper is organized as follows: Section 2 presents the residual vector quantization and Section 3 introduces our exhaustive and non-exhaustive search methods, which are based on the residual vector quantization.

Section 4 evaluates the search performance and compares our approaches with two state-of-the-art methods. Section 5 discusses the results and Section 6 concludes.

2. Residual Vector Quantization

A K-point vector quantizer Q maps a vector x ∈ R^D to its nearest centroid in the codebook C = {c_i, i = 1..K}, c_i ∈ R^D:

\hat{x} = Q(x) = \arg\min_{c_i \in C} d(x, c_i)    (1)

where d(x, c_i) is the Euclidean distance between x and c_i. This lossy process can be interpreted as approximating x by one of the centroids in R^D [18], and the residual vector is:

\epsilon = x - \hat{x} = x - Q(x)    (2)

The performance of the quantizer Q is measured by the mean squared error (MSE):

\mathrm{MSE}(Q) = E_X\big[d(x, Q(x))^2\big]    (3)

Residual vector quantization [19,20] is a common technique for reducing the quantization error with several low-complexity quantizers.
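As a concrete illustration of Equations (1)-(3), the following is a minimal sketch in Python/NumPy; the codebook and data here are random stand-ins rather than learned centroids or real vectors.

```python
import numpy as np

def quantize(x, codebook):
    """Map x to its nearest centroid in the codebook, as in Equation (1)."""
    dists = np.linalg.norm(codebook - x, axis=1)  # Euclidean distances d(x, c_i)
    i = int(np.argmin(dists))                     # index of the nearest centroid
    return i, codebook[i]

def residual(x, codebook):
    """Residual vector x - Q(x), as in Equation (2)."""
    _, x_hat = quantize(x, codebook)
    return x - x_hat

def mse(X, codebook):
    """Mean squared quantization error over a vector set X, as in Equation (3)."""
    return float(np.mean([np.sum(residual(x, codebook) ** 2) for x in X]))

# Toy usage with K = 256 random centroids in D = 32 dimensions.
rng = np.random.default_rng(0)
codebook = rng.standard_normal((256, 32))
X = rng.standard_normal((1000, 32))
print(mse(X, codebook))
```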

Residual vector quantization approximates the quantization error with another quantizer instead of discarding it. Several stage-quantizers, each with its own stage-codebook, are connected sequentially. Each stage-quantizer approximates the preceding stage's residual vector by one of the centroids in its stage-codebook and generates a new residual vector for the succeeding quantization stage. Block diagrams of a two-stage residual vector quantization are shown in Figure 1. In the learning phase (Figure 1(a)), a training vector set X is provided and the first stage-codebook C1 is generated by the k-means clustering method. The entire training set is then quantized by the first stage-quantizer Q1, which is defined by C1.

The difference between X and its first-stage quantization outputs, which is the first residual vector set E1, is used for learning the second stage-codebook C2. In the quantizing phase (Figure 1(b)), the input vector x is quantized by the first stage-quantizer Q1, which is defined by the first stage-codebook C1. The difference between x and its first-stage quantization output, which is the first residual vector ε1, is quantized by the second stage-quantizer Q2. The second residual vector ε2 is discarded.
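Below is a minimal end-to-end sketch of this two-stage scheme (Figure 1), assuming scikit-learn's KMeans for the k-means clustering step; the stage-codebook sizes and the toy data are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_two_stage_rvq(X, k1=256, k2=256, seed=0):
    """Learning phase (Figure 1(a)): learn stage-codebooks C1 and C2 from X."""
    # Stage 1: k-means on the raw training vectors yields codebook C1.
    km1 = KMeans(n_clusters=k1, n_init=10, random_state=seed).fit(X)
    C1 = km1.cluster_centers_
    # First residual set E1 = X - Q1(X), used to learn the second stage-codebook.
    E1 = X - C1[km1.labels_]
    # Stage 2: k-means on the residuals yields codebook C2.
    km2 = KMeans(n_clusters=k2, n_init=10, random_state=seed).fit(E1)
    C2 = km2.cluster_centers_
    return C1, C2

def nearest_centroid(x, codebook):
    """Index of the nearest centroid, i.e. the assignment of Equation (1)."""
    return int(np.argmin(np.linalg.norm(codebook - x, axis=1)))

def encode(x, C1, C2):
    """Quantizing phase (Figure 1(b)): encode x as a pair of stage indices."""
    i1 = nearest_centroid(x, C1)   # first-stage index from Q1
    e1 = x - C1[i1]                # first residual vector
    i2 = nearest_centroid(e1, C2)  # second-stage index from Q2
    # The second residual e1 - C2[i2] is discarded.
    return i1, i2

def decode(i1, i2, C1, C2):
    """Approximate reconstruction: sum of the two selected centroids."""
    return C1[i1] + C2[i2]

# Toy usage on random data.
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 32))
C1, C2 = train_two_stage_rvq(X, k1=64, k2=64)
code = encode(X[0], C1, C2)
print(code, np.linalg.norm(X[0] - decode(*code, C1, C2)))
```

With k1 = k2 = 256, for example, each vector is encoded with just two one-byte indices while the composed quantizer effectively offers 256 × 256 reconstruction values, which is the appeal of stacking several low-complexity stage-quantizers.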
