A policy-aware parallel execution control mechanism for language application

Mai Xuan Trang, Yohei Murakami, Toru Ishida

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Many language resources are shared as web services to process data on the internet. As data sets keep growing, language services increasingly face big data problems: very large inputs, such as huge collections of multilingual text, place challenging demands on storage and processing. Handling such data volumes requires parallel computing architectures. Parallel execution is one way to improve the performance of a language service on large inputs: the data set is partitioned and multiple processes of the service are executed concurrently. However, because computing resources are limited, service providers employ policies that cap the number of concurrent processes their services will serve. In an advanced language application, several language services, provided by different providers with different policies, are combined into a composite service to handle complex tasks. To use parallel execution efficiently in such an application, the parallel configuration must be optimized against the service policies of all participating providers. We propose a model that takes the policies of the atomic language services into account when predicting composite service performance. Based on this model, we design a mechanism that adapts the parallel execution settings of a composite service to the atomic services' policies in order to attain optimal performance for the language application.
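The core idea in the abstract, partitioning a data set and bounding each atomic service's concurrency by its provider's policy, can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the service names, policy caps, and helper functions are hypothetical, and real web-service calls are replaced by stubs.

```python
# Sketch: a composite service built from two hypothetical atomic
# services, each with a provider policy capping concurrent requests.
from concurrent.futures import ThreadPoolExecutor
from threading import Semaphore

# Hypothetical provider policies: max concurrent processes per service.
POLICIES = {"tokenize": 4, "translate": 2}
_limits = {name: Semaphore(cap) for name, cap in POLICIES.items()}

def call_service(name, chunk):
    """Invoke an atomic service while respecting its concurrency policy."""
    with _limits[name]:
        # Stand-in for a real web-service invocation.
        return f"{name}({chunk})"

def composite(chunk):
    """Composite service: tokenize, then translate, one data partition."""
    return call_service("translate", call_service("tokenize", chunk))

def run_parallel(data, num_partitions):
    """Partition the data and execute the composite service concurrently.

    Useful parallelism is bounded by the tightest policy (here, 2 for
    "translate"), which is why the parallel configuration must be tuned
    to the participating providers' policies.
    """
    chunks = [data[i::num_partitions] for i in range(num_partitions)]
    with ThreadPoolExecutor(max_workers=num_partitions) as pool:
        return list(pool.map(composite, chunks))

results = run_parallel(list("abcdef"), 3)
```

Raising `num_partitions` beyond the smallest policy cap adds no throughput in this sketch, since extra workers simply queue on the semaphore; the paper's mechanism addresses exactly this mismatch between parallel settings and atomic-service policies.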

Original language: English
Title of host publication: Worldwide Language Service Infrastructure - 2nd International Workshop, WLSI 2015, Revised Selected Papers
Editors: Donghui Lin, Yohei Murakami
Publisher: Springer-Verlag
Pages: 71-85
Number of pages: 15
ISBN (Print): 9783319314679
DOI: 10.1007/978-3-319-31468-6_5
Publication status: Published - 2016 Jan 1
Externally published: Yes
Event: 2nd International Workshop on Worldwide Language Service Infrastructure, WLSI 2015 - Kyoto, Japan
Duration: 2015 Jan 22 - 2015 Jan 23

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 9442
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 2nd International Workshop on Worldwide Language Service Infrastructure, WLSI 2015
Country: Japan
City: Kyoto
Period: 15/1/22 - 15/1/23


Keywords

  • Adaptation mechanism
  • Big data
  • Language service composition
  • Parallel execution

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Trang, M. X., Murakami, Y., & Ishida, T. (2016). A policy-aware parallel execution control mechanism for language application. In D. Lin, & Y. Murakami (Eds.), Worldwide Language Service Infrastructure - 2nd International Workshop, WLSI 2015, Revised Selected Papers (pp. 71-85). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 9442). Springer-Verlag. https://doi.org/10.1007/978-3-319-31468-6_5
