Many language resources have been shared as web services to process data on the Internet. As data sets keep growing, language services face increasing big-data problems, such as the demanding storage and processing requirements of very large data sets, e.g., huge collections of multilingual text. Handling data volumes of this size calls for parallel computing. Parallel execution is one way to improve the performance of language services that process large amounts of data: the data set is partitioned, and multiple processes of the language service are executed concurrently. However, because computing resources are limited, service providers adopt policies that cap the number of concurrent processes their services will serve. In an advanced language application, several language services, provided by different providers under different policies, are combined into a composite service to handle complex tasks. To benefit from parallel execution, the parallel configuration of such an application must therefore be optimized subject to the policies of all participating providers. We propose a model that takes the policies of the atomic language services into account when predicting composite service performance. Based on this model, we design a mechanism that adapts the parallel execution settings of a composite service to the atomic services’ policies so that the language application attains optimal performance.
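The policy-constrained parallel execution described above can be sketched as follows. This is a minimal illustration, not the paper's actual mechanism: the service names, the `POLICY_LIMIT` table, and `call_service` are all hypothetical stand-ins for real provider policies and remote language-service calls.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-provider concurrency policies (illustrative values).
POLICY_LIMIT = {"translator": 3, "tagger": 2}

def call_service(name, chunk):
    # Stand-in for a remote language-service invocation on one partition.
    return [f"{name}:{item}" for item in chunk]

def parallel_invoke(name, data, chunk_size):
    # Partition the large data set into chunks for concurrent processing.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    # The provider's policy caps how many requests may run concurrently,
    # so the worker-pool size is taken from the policy, not chosen freely.
    with ThreadPoolExecutor(max_workers=POLICY_LIMIT[name]) as pool:
        results = pool.map(lambda c: call_service(name, c), chunks)
    merged = []
    for part in results:
        merged.extend(part)
    return merged
```

In a composite service, each atomic service would be invoked this way with its own provider's limit, and the overall degree of parallelism tuned across all the limits rather than per service in isolation.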