Generalized vector quantization with optimal connection of elements

Yasuo Matsuyama

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Summary form only given. Generalized algorithms for vector quantization are presented, and their convergence on empirical data is proved. The generalized vector quantization allows variable-dimensional vectors, adjusted to cover variable subregions of the source data; this class of algorithms is therefore called variable-region vector quantization. Algorithm I is the generalization of the GLA (generalized Lloyd algorithm) to the variable-region case and is called full-gain variable-region vector quantization. Algorithm II, on the other hand, is the variable-region generalization of the gain-shape type. Each variable subregion is formed by connecting or grouping elements so that the resulting set of variable-dimensional super-vectors has minimum distortion with respect to a codebook. Algorithm III addresses encoding and decoding for data compression. Algorithm IV gives a suboptimal minimization that alleviates the computational load. Examples of region optimization on speech and images are given. The methods presented are applicable and well matched to various pattern-handling tasks, such as neural algorithms for parallel distributed processing. Results obtained by fine-grained parallel computation with a Guarded Horn Clauses front end are also given.
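The abstract is given in summary form only, so the variable-region algorithms themselves are not spelled out here. As background, the sketch below illustrates the ordinary fixed-dimension GLA (generalized Lloyd algorithm) that Algorithm I generalizes: it alternates a minimum-distortion (nearest-neighbor) assignment step with a centroid update of the codebook. This is a minimal illustration only; the function names, the NumPy implementation, and the toy Gaussian source are assumptions for demonstration and do not come from the paper.

import numpy as np

def train_gla_codebook(data, codebook_size, n_iters=50, seed=0):
    # Plain fixed-dimension GLA / Lloyd iteration for vector quantization.
    # data: (n_vectors, dim) array of training vectors.
    # Returns a (codebook_size, dim) codebook that locally minimizes
    # squared-error distortion on the training set.
    rng = np.random.default_rng(seed)
    # Initialize the codebook with randomly chosen training vectors.
    codebook = data[rng.choice(len(data), size=codebook_size, replace=False)].copy()
    for _ in range(n_iters):
        # Step 1: minimum-distortion (nearest-neighbor) assignment.
        dists = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Step 2: centroid update -- each codeword becomes the mean of its cell.
        for k in range(codebook_size):
            members = data[labels == k]
            if len(members) > 0:
                codebook[k] = members.mean(axis=0)
    return codebook

def encode(data, codebook):
    # Map each input vector to the index of its nearest codeword.
    dists = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)

# Toy usage on a synthetic 4-dimensional Gaussian source (illustrative only).
rng = np.random.default_rng(1)
samples = rng.normal(size=(1000, 4))
cb = train_gla_codebook(samples, codebook_size=16)
idx = encode(samples, cb)
print("mean squared distortion:", ((samples - cb[idx]) ** 2).sum(axis=1).mean())

In the variable-region setting described in the abstract, the assignment step would additionally decide how source elements are connected or grouped into variable-dimensional super-vectors; this fixed-dimension sketch does not attempt that.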

Original language: English
Title of host publication: IEEE 1988 Int Symp on Inf Theory Abstr of Pap
Place of publication: New York, NY, USA
Publisher: IEEE
Pages: 164-165
Number of pages: 2
Volume: 25, No. 13
Publication status: Published - 1988
Externally published: Yes

Fingerprint

  • Vector quantization
  • Data compression
  • Decoding
  • Processing

ASJC Scopus subject areas

  • Engineering (all)

Cite this

Matsuyama, Y. (1988). Generalized vector quantization with optimal connection of elements. In IEEE 1988 Int Symp on Inf Theory Abstr of Pap (Vol. 25, No. 13, pp. 164-165). New York, NY, USA: IEEE.
