

Comparison of several parallel models

2025-01-16 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

In a broad sense, parallelism can be divided into four categories:

1. Shared-memory parallelism with explicit threads (e.g., Pthreads and Java threads);
2. Shared-memory parallelism with task/data parallelism (e.g., OpenMP);
3. Distributed parallelism with explicit communication (e.g., MPI, SHMEM, and Global Arrays);
4. Distributed parallelism with special global access (e.g., Co-Array Fortran, UPC).

Specifically,

Pthreads is a shared-memory programming model in which parallelism is expressed by running functions on multiple threads. A parallel function body is executed concurrently by several threads, all of which can access shared global data. Pthreads also serves as the underlying implementation of many higher-level parallel models.

Java is a general-purpose programming language that supports parallelism in the form of threads. Parallel Java programs run on shared-memory processors and closely resemble Pthreads programs. Both Pthreads and Java threads are limited to shared-memory machines.

OpenMP is also a shared-memory model; its parallelism is expressed by attaching directives (pragmas) to loops and functions. An OpenMP directive can mark a loop whose iterations may run in parallel, or a function that may be executed in parallel; other clauses indicate whether data is shared among threads or private to each thread. A compiler can translate an OpenMP program into the equivalent of a Pthreads program, with parallel loop bodies turned into parallel functions. OpenMP is an industry-standard parallelization interface supported by multiple languages and platforms, but it currently applies only to shared-memory processors.

MPI is a distributed-memory model in which processes must communicate explicitly, sending and receiving data through the MPI runtime library. MPI is widely adopted, available on virtually every parallel platform, and its implementations are carefully performance-tuned. Although it demands more programming effort, MPI is currently the most popular parallel model because of its portability and performance.

Many scientific applications have irregular memory access patterns, which makes them difficult to parallelize efficiently. Three typical examples are: 1. irregular table access, as in many parallel database operations; 2. irregular dynamic access to sparse data structures, such as finding the principal eigenvalue of an n-by-n sparse matrix with the conjugate gradient method; 3. in-memory sorting.

Source:

Evaluating the Impact of Programming Language Features on the Performance of Parallel Applications on Cluster Architectures (by Konstantin Berlin, et al.)



