Message-oriented parallel implementation of Moded Flat GHC

Kazunori Ueda, Masao Morita

    Research output: Contribution to journal › Article

    5 Citations (Scopus)

    Abstract

    We proposed in Ref. 5) a new, message-oriented implementation technique for Moded Flat GHC that compiled unification for data transfer into message passing. The technique was based on constraint-based program analysis, and significantly improved the performance of programs that used goals and streams to implement reconfigurable data structures. In this paper we discuss how the technique can be parallelized. We focus on a method for shared-memory multiprocessors, called the shared-goal method, though a different method could be used for distributed-memory multiprocessors. Unlike other parallel implementations of concurrent logic languages which we call process-oriented, the unit of parallel execution is not an individual goal but a chain of message sends caused successively by an initial message send. Parallelism comes from the existence of different chains of message sends that can be executed independently or in a pipelined manner. Mutual exclusion based on busy waiting and on message buffering controls access to individual, shared goals. Typical goals allow last-send optimization, the message-oriented counterpart of last-call optimization. We have built an experimental implementation on Sequent Symmetry. In spite of the simple scheduling currently adopted, preliminary evaluation shows good parallel speedup and good absolute performance for concurrent operations on binary process trees.
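    The abstract's central idea — goals driven by streams of messages, where each send may trigger a further send down a chain, and access to each shared goal is serialized — can be sketched, purely for illustration, in Python. This is not the authors' implementation (which compiles GHC unification into message passing); the names `TreeGoal` and `search` are hypothetical, and a per-goal mailbox stands in for the paper's message-buffering form of mutual exclusion.

    ```python
    # Illustrative sketch only: each binary-process-tree node is a goal with
    # its own thread and mailbox. An initial "search" send starts a chain of
    # message sends down the tree; searches in disjoint subtrees follow
    # independent chains and can proceed in parallel.
    import queue
    import threading

    class TreeGoal:
        """A node of a binary process tree, acting as a message-driven goal."""
        def __init__(self, key, value):
            self.key, self.value = key, value
            self.left = self.right = None
            self.mailbox = queue.Queue()          # buffers serialize access
            threading.Thread(target=self._serve, daemon=True).start()

        def _serve(self):
            while True:
                op, key, reply = self.mailbox.get()   # only "search" modeled
                if key == self.key:
                    reply.put(self.value)             # chain ends here
                else:
                    child = self.left if key < self.key else self.right
                    if child is None:
                        reply.put(None)               # key absent
                    else:
                        child.mailbox.put((op, key, reply))  # next link of chain

    def insert(root, key, value):
        # Tree construction is done sequentially here, for brevity.
        node = root
        while True:
            if key < node.key:
                if node.left is None:
                    node.left = TreeGoal(key, value); return
                node = node.left
            else:
                if node.right is None:
                    node.right = TreeGoal(key, value); return
                node = node.right

    def search(root, key):
        reply = queue.Queue()
        root.mailbox.put(("search", key, reply))  # initial send starts a chain
        return reply.get()

    root = TreeGoal(50, "fifty")
    insert(root, 30, "thirty")
    insert(root, 70, "seventy")
    insert(root, 20, "twenty")
    ```

    Two concurrent `search` calls into the left and right subtrees traverse disjoint chains of goals, which is where the parallelism the abstract describes comes from; the sketch does not attempt to model last-send optimization.
    
    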

    Original language: English
    Pages (from-to): 323-341
    Number of pages: 19
    Journal: New Generation Computing
    Volume: 11
    Issue number: 3-4
    DOI: 10.1007/BF03037181
    Publication status: Published - September 1993

    Keywords

    • Concurrent Logic Programming
    • GHC
    • Moded Flat GHC
    • Mutual Exclusion
    • Optimization
    • Parallelism
    • Shared-Memory Multiprocessors

    ASJC Scopus subject areas

    • Theoretical Computer Science
    • Software
    • Hardware and Architecture
    • Computer Networks and Communications

    Cite this

    Ueda, Kazunori; Morita, Masao. Message-oriented parallel implementation of Moded Flat GHC. In: New Generation Computing, Vol. 11, No. 3-4, September 1993, pp. 323-341.

    @article{97c47173425742ac8b204a8043355cbf,
    title = "Message-oriented parallel implementation of Moded Flat GHC",
    keywords = "Concurrent Logic Programming, GHC, Moded Flat GHC, Mutual Exclusion, Optimization, Parallelism, Shared-Memory Multiprocessors",
    author = "Kazunori Ueda and Masao Morita",
    year = "1993",
    month = "9",
    doi = "10.1007/BF03037181",
    language = "English",
    volume = "11",
    pages = "323--341",
    journal = "New Generation Computing",
    issn = "0288-3635",
    publisher = "Springer Japan",
    number = "3-4",

    }
