### Abstract

Summary form only given. Generalized algorithms for vector quantization are presented, and their convergence on empirical data is proved. The generalized vector quantization allows adjustable, variable-dimensional vectors covering variable subregions of the source data; this class of algorithms is therefore called variable region vector quantization. Algorithm I generalizes the GLA to the variable region case and is called full-gain variable region vector quantization. Algorithm II, on the other hand, is the variable region generalization of the gain-shape type. Each variable subregion is formed by connecting or grouping elements so that the resulting set of variable-dimensional super-vectors has minimum distortion with respect to a codebook. Algorithm III addresses encoding and decoding for data compression. Algorithm IV gives a suboptimal minimization that alleviates the computational load. Examples of region optimization on speech and images are given. The methods presented here apply to various pattern-handling tasks, such as neural algorithms of parallel distributed processing. Results obtained by fine-grain parallel computation with a guarded-Horn-clauses front end are also given.
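Algorithm I generalizes the GLA (generalized Lloyd algorithm) to the variable region case. For orientation, the following is a minimal sketch of the classical fixed-dimension GLA baseline that generalization builds on; it is not the paper's algorithm, and the function name and parameters are illustrative.

```python
import numpy as np

def gla(data, codebook_size, iters=20, seed=0):
    """Fixed-dimension vector quantization via the generalized Lloyd algorithm.

    Alternates a nearest-codeword assignment step with a centroid update
    step; the average distortion is non-increasing across iterations.
    """
    rng = np.random.default_rng(seed)
    # Initialize the codebook with randomly chosen training vectors.
    codebook = data[rng.choice(len(data), codebook_size, replace=False)]
    labels = np.zeros(len(data), dtype=int)
    for _ in range(iters):
        # Assignment: map each training vector to its nearest codeword.
        dists = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update: replace each codeword with the centroid of its cell.
        for k in range(codebook_size):
            members = data[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    # Mean squared distortion of the final quantization.
    distortion = (np.linalg.norm(data - codebook[labels], axis=1) ** 2).mean()
    return codebook, labels, distortion
```

The variable region algorithms of the paper additionally optimize how source elements are connected into super-vectors of varying dimension, rather than quantizing fixed-size blocks as above.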

Original language | English |
---|---|

Title of host publication | IEEE 1988 Int Symp on Inf Theory Abstr of Pap |

Place of Publication | New York, NY, USA |

Publisher | Publ by IEEE |

Pages | 164-165 |

Number of pages | 2 |

Volume | 25 n 13 |

Publication status | Published - 1988 |

Externally published | Yes |

### ASJC Scopus subject areas

- Engineering(all)

### Cite this

*IEEE 1988 Int Symp on Inf Theory Abstr of Pap* (Vol. 25 n 13, pp. 164-165). New York, NY, USA: Publ by IEEE.

**Generalized vector quantization with optimal connection of elements.** / Matsuyama, Yasuo.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

*IEEE 1988 Int Symp on Inf Theory Abstr of Pap.* Vol. 25 n 13, Publ by IEEE, New York, NY, USA, pp. 164-165.


TY - GEN

T1 - Generalized vector quantization with optimal connection of elements

AU - Matsuyama, Yasuo

PY - 1988

Y1 - 1988

N2 - Summary form only given. Generalized algorithms for vector quantization are presented, and their convergence on empirical data is proved. The generalized vector quantization allows adjustable, variable-dimensional vectors covering variable subregions of the source data; this class of algorithms is therefore called variable region vector quantization. Algorithm I generalizes the GLA to the variable region case and is called full-gain variable region vector quantization. Algorithm II, on the other hand, is the variable region generalization of the gain-shape type. Each variable subregion is formed by connecting or grouping elements so that the resulting set of variable-dimensional super-vectors has minimum distortion with respect to a codebook. Algorithm III addresses encoding and decoding for data compression. Algorithm IV gives a suboptimal minimization that alleviates the computational load. Examples of region optimization on speech and images are given. The methods presented here apply to various pattern-handling tasks, such as neural algorithms of parallel distributed processing. Results obtained by fine-grain parallel computation with a guarded-Horn-clauses front end are also given.

AB - Summary form only given. Generalized algorithms for vector quantization are presented, and their convergence on empirical data is proved. The generalized vector quantization allows adjustable, variable-dimensional vectors covering variable subregions of the source data; this class of algorithms is therefore called variable region vector quantization. Algorithm I generalizes the GLA to the variable region case and is called full-gain variable region vector quantization. Algorithm II, on the other hand, is the variable region generalization of the gain-shape type. Each variable subregion is formed by connecting or grouping elements so that the resulting set of variable-dimensional super-vectors has minimum distortion with respect to a codebook. Algorithm III addresses encoding and decoding for data compression. Algorithm IV gives a suboptimal minimization that alleviates the computational load. Examples of region optimization on speech and images are given. The methods presented here apply to various pattern-handling tasks, such as neural algorithms of parallel distributed processing. Results obtained by fine-grain parallel computation with a guarded-Horn-clauses front end are also given.

UR - http://www.scopus.com/inward/record.url?scp=0024122808&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0024122808&partnerID=8YFLogxK

M3 - Conference contribution

VL - 25 n 13

SP - 164

EP - 165

BT - IEEE 1988 Int Symp on Inf Theory Abstr of Pap

PB - Publ by IEEE

CY - New York, NY, USA

ER -