Abstract
Summary form only given. Generalized algorithms for vector quantization are presented and their convergence on empirical data is proved. The generalized vector quantization allows adjustable, variable-dimensional vectors that cover variable subregions of the source data; this class of algorithms is therefore called variable region vector quantization. Algorithm I generalizes the GLA to the variable region case and is called full-gain variable region vector quantization. Algorithm II, on the other hand, is the variable region generalization of the gain-shape type. Each variable subregion is formed by connecting or grouping elements so that the resulting set of variable-dimensional super-vectors has minimum distortion with respect to a codebook. Algorithm III addresses encoding and decoding for data compression. Algorithm IV gives a suboptimal minimization that alleviates the computational load. Examples of region optimization on speech and images are given. The methods presented here are applicable to various pattern-handling approaches, such as neural algorithms for parallel distributed processing. Results obtained by fine-grain parallel computation with a guarded Horn clauses front end are also given.
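The abstract only summarizes the algorithms, so the variable region extensions are not spelled out here. For orientation, the following is a minimal sketch of the classical fixed-dimension GLA (Lloyd-style codebook training) that Algorithm I is said to generalize; the function name `gla`, the squared-error distortion, and all parameters are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def gla(train, codebook_size, iters=20, seed=0):
    """Train a fixed-dimension VQ codebook on `train` (n_vectors x dim).

    Hypothetical sketch of the standard GLA; the paper's variable region
    algorithms additionally regroup source elements into variable-dimensional
    super-vectors, which is not done here.
    """
    rng = np.random.default_rng(seed)
    # Initialize the codebook with randomly chosen training vectors.
    codebook = train[rng.choice(len(train), codebook_size, replace=False)].copy()
    labels = np.zeros(len(train), dtype=int)
    for _ in range(iters):
        # Nearest-neighbor (minimum-distortion) partition of the training set.
        dist = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = dist.argmin(axis=1)
        # Centroid condition: replace each codevector by the mean of its cell.
        for k in range(codebook_size):
            cell = train[labels == k]
            if len(cell):
                codebook[k] = cell.mean(axis=0)
    return codebook, labels

# Example: quantize 2-D source vectors with an 8-entry codebook.
data = np.random.default_rng(1).normal(size=(1000, 2))
cb, assignment = gla(data, codebook_size=8)
```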
| Original language | English |
| --- | --- |
| Title of host publication | IEEE 1988 Int Symp on Inf Theory Abstr of Pap |
| Place of Publication | New York, NY, USA |
| Publisher | Publ by IEEE |
| Pages | 164-165 |
| Number of pages | 2 |
| Volume | 25 n 13 |
| Publication status | Published - 1988 |
| Externally published | Yes |
ASJC Scopus subject areas
- Engineering (all)