### Abstract

Harmonic competition is a learning strategy based upon winner-take-all or winner-take-quota with respect to a composite of heterogeneous subcosts. This learning is unsupervised and self-organizing. The subcosts may conflict with each other; thus, the total learning system realizes a self-organizing multiple-criteria optimization. The subcosts are combined additively and multiplicatively using adjusting parameters. For such a total cost, a general successive learning algorithm is derived first. Then, specific problems in Euclidean space are addressed. Vector quantization with various constraints and traveling salesperson problems are selected as test problems. The former is a typical class of problems in which the number of neurons is less than the number of data; the latter is the opposite case, and a duality exists between these two classes. In both cases, the combination parameters of the subcosts show wide dynamic ranges in the course of learning. It is possible, however, to decide the parameter control from the structure of the total cost. This method finds a preferred solution from the Pareto-optimal set of the multiple-objective optimization. Controlled mutations motivated by genetic algorithms are shown to be effective in finding near-optimal solutions. All results show the significance of the additional constraints and the effectiveness of the dynamic parameter control.
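To make the idea of competing on a composite of subcosts concrete, here is a minimal sketch of winner-take-all vector quantization in which an auxiliary subcost is combined additively with the distortion. This is an illustrative stand-in, not the paper's harmonic competition algorithm: the usage-equalization penalty, the weight `lam`, and the function `competitive_vq` are all hypothetical choices made for this sketch, and the paper's winner-take-quota variant and dynamic parameter control are not reproduced.

```python
import numpy as np

def competitive_vq(data, n_units=4, lam=0.5, lr=0.1, epochs=20, seed=0):
    """Winner-take-all vector quantization with a composite cost.

    Each unit competes on squared distortion plus an additively combined
    usage-equalization subcost (a stand-in for one of the heterogeneous
    subcosts); lam weights the auxiliary term against the distortion.
    """
    rng = np.random.default_rng(seed)
    # initialize the codebook from randomly chosen data points
    w = data[rng.choice(len(data), n_units, replace=False)].copy()
    wins = np.zeros(n_units)  # how often each unit has won so far
    for _ in range(epochs):
        for x in rng.permutation(data):
            # composite subcost: distortion + relative-usage penalty
            cost = ((w - x) ** 2).sum(axis=1) + lam * wins / (wins.sum() + 1)
            k = int(np.argmin(cost))   # winner-take-all selection
            w[k] += lr * (x - w[k])    # move only the winner toward the datum
            wins[k] += 1
    return w, wins

# usage: quantize 2-D points drawn from two synthetic clusters
data = np.vstack([np.random.default_rng(1).normal(0.0, 0.1, (50, 2)),
                  np.random.default_rng(2).normal(1.0, 0.1, (50, 2))])
codebook, wins = competitive_vq(data)
```

The usage penalty discourages any single unit from winning every competition, which is one simple way conflicting subcosts (low distortion vs. equalized unit usage) can be traded off through a combination parameter.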

| Original language | English |
| --- | --- |
| Pages (from-to) | 652-668 |
| Number of pages | 17 |
| Journal | IEEE Transactions on Neural Networks |
| Volume | 7 |
| Issue number | 3 |
| DOIs | https://doi.org/10.1109/72.501723 |
| Publication status | Published - 1996 |

### ASJC Scopus subject areas

- Control and Systems Engineering
- Theoretical Computer Science
- Electrical and Electronic Engineering
- Artificial Intelligence
- Computational Theory and Mathematics
- Hardware and Architecture

### Cite this

Matsuyama, Y. (1996). Harmonic competition: A self-organizing multiple criteria optimization. *IEEE Transactions on Neural Networks*, *7*(3), 652-668. https://doi.org/10.1109/72.501723

Research output: Contribution to journal › Article

TY - JOUR

T1 - Harmonic competition

T2 - A self-organizing multiple criteria optimization

AU - Matsuyama, Yasuo

PY - 1996

Y1 - 1996

AB - Harmonic competition is a learning strategy based upon winner-take-all or winner-take-quota with respect to a composite of heterogeneous subcosts. This learning is unsupervised and self-organizing. The subcosts may conflict with each other; thus, the total learning system realizes a self-organizing multiple-criteria optimization. The subcosts are combined additively and multiplicatively using adjusting parameters. For such a total cost, a general successive learning algorithm is derived first. Then, specific problems in Euclidean space are addressed. Vector quantization with various constraints and traveling salesperson problems are selected as test problems. The former is a typical class of problems in which the number of neurons is less than the number of data; the latter is the opposite case, and a duality exists between these two classes. In both cases, the combination parameters of the subcosts show wide dynamic ranges in the course of learning. It is possible, however, to decide the parameter control from the structure of the total cost. This method finds a preferred solution from the Pareto-optimal set of the multiple-objective optimization. Controlled mutations motivated by genetic algorithms are shown to be effective in finding near-optimal solutions. All results show the significance of the additional constraints and the effectiveness of the dynamic parameter control.

UR - http://www.scopus.com/inward/record.url?scp=0030149635&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0030149635&partnerID=8YFLogxK

U2 - 10.1109/72.501723

DO - 10.1109/72.501723

M3 - Article

VL - 7

SP - 652

EP - 668

JO - IEEE Transactions on Neural Networks

JF - IEEE Transactions on Neural Networks

SN - 1045-9227

IS - 3

ER -