
How to Build a Parallel Computing MPI Environment Based on Intel

2025-01-18 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/01 Report --

This article explains how to build a parallel computing MPI environment based on Intel tools. The editor finds it very practical, so it is shared here as a reference; follow along and have a look.

MPI is a library, not a language. Many people think of MPI as a parallel language, which is not accurate. According to the classification of parallel languages, however, FORTRAN+MPI or C+MPI can be regarded as extensions of the original serial languages, and the MPI library can be called from FORTRAN77, C, Fortran90 and C++. Syntactically, calling MPI obeys all the rules for calling library functions and procedures, and is no different from calling ordinary functions or procedures.

MPI has been implemented on IBM PC, MS Windows, all major Unix workstations and all mainstream parallel computers. C or Fortran parallel programs that use MPI for message passing can run unchanged on IBM PC, MS Windows, Unix workstations, and various parallel computers.

High-performance parallel computing has been highly valued at home and abroad because of its enormous numerical computing and data processing capacity, and it has produced great achievements in scientific research, engineering and military applications. Parallel computing solves a problem by decomposing a large computational task into many independent but related sub-problems, which are then distributed to the nodes and executed in parallel.

I. Installation environment

CentOS 6.4 with a minimal graphical installation. Make sure the sshd service can start normally on every node and that the firewall and SELinux have been turned off. Required software: Intel_Fortran, Intel_C++, Intel_MPI.

II. Set up passwordless login by hostname

1. Access through hostname

Assign an IP address to each node (consecutive addresses if possible) and configure the /etc/hosts file so that IP addresses resolve to hostnames. You can use the same /etc/hosts file on all machines; it contains lines of the following form:

10.12.190.183 dell
10.12.190.187 lenovo

2. Password-free access between computing nodes

Suppose A (10.12.190.183) is the client machine and B (10.12.190.187) is the target machine. Either rsa or dsa can be chosen as the key type; the default is rsa.

# ssh-keygen -t rsa    # generate an RSA key pair; rsa is the default type

The output looks like the following; whenever the program asks for input, you can simply press Enter:

Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
b3:8e:87:44:71:67:81:06:d2:51:01:a4:f8:74:78:fd root@kvm.local
The key's randomart image is:
+--[ RSA 2048]----+
(randomart image omitted)
+-----------------+

This generates the private key file id_rsa and the public key file id_rsa.pub.

Copy A's ~/.ssh/id_rsa.pub to machine B's ~/.ssh directory and append it with cat id_rsa.pub >> ~/.ssh/authorized_keys. In the same way, copy B's ~/.ssh/id_rsa.pub to machine A. A sketch of the copy step is shown below.
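
A minimal sketch of that copy step from A (dell) to B (lenovo), assuming both sides use the root account and the default key file names produced by ssh-keygen; the temporary file name /tmp/id_rsa.pub.dell is only illustrative:

# scp ~/.ssh/id_rsa.pub root@10.12.190.187:/tmp/id_rsa.pub.dell

# ssh root@10.12.190.187 "cat /tmp/id_rsa.pub.dell >> ~/.ssh/authorized_keys"

Both commands will still ask for B's password, because passwordless login is not in place yet; after the public key has been appended, logins from A to B no longer need it.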

Set authorized_keys permissions:

# chmod 600 authorized_keys

Set .ssh directory permissions:

# chmod 700 .ssh

The ssh-copy-id command makes this easier: it copies the local host's public key to the remote host's authorized_keys file, and it also sets appropriate permissions on the remote user's home directory, ~/.ssh and ~/.ssh/authorized_keys.

# ssh-copy-id -i ~/.ssh/id_rsa.pub 10.12.190.187

After completing the above operations, you can reach the remote machine from the local machine without a password. Do not set the permissions of these files and directories to 777: such permissions are too open to be secure, and public key authentication will refuse to work with them.

You can also generate the keys with ssh-keygen on a single machine and copy the whole .ssh directory, including id_rsa and authorized_keys, to every node. Then check whether you can log in to the other nodes directly (no password should be required; type yes and press Enter when asked about the host key):

# ssh node1
# ssh node2
# ssh node3
# ssh node4

If you can log in to other nodes without a password between any two nodes, the configuration is successful.

III. Configuration of NFS file system

An example configuration follows (assuming the NFS server's IP is 10.12.190.183; the configuration must be done as the root user):

Server-side configuration method (the following configuration is done only on the primary node):

1. /etc/exports file configuration

Add the following lines to the file /etc/exports:

/home/cluster 10.12.190.183(rw,sync,no_root_squash)
/home/cluster 10.12.190.185(rw,sync,no_root_squash)
/home/cluster 10.12.190.187(rw,sync,no_root_squash)

These lines state that the NFS server shares its /home/cluster directory with the three nodes whose IP addresses are 10.12.190.183, 10.12.190.185 and 10.12.190.187, and gives them read and write access. If there are more nodes, add them in the same way.

Then execute the following command to start port mapping:

# /etc/rc.d/init.d/rpcbind start    (Note: on newer kernels the port-mapping daemon is rpcbind; on older kernels it is portmap, started with service portmap start)

Next, execute the following commands to start the NFS service; NFS will launch its daemons and begin listening for client requests:

# /etc/rc.d/init.d/nfs start
# chkconfig nfs on

You can also simply reboot the Linux server, after which the NFS service starts automatically.
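
If /etc/exports is edited again later while NFS is already running, the new entries can be published without restarting the service; this is a standard NFS administration command rather than a step from the original procedure:

# exportfs -ra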

2. Client configuration method (the same configuration needs to be done on all child nodes)

Create the same directory as on the server; it will be used to mount the server's shared files:

# mkdir /home/cluster

View the server's existing shared directory (this step can be omitted)

# showmount -e 10.12.190.183

This command lists the directories that the server at 10.12.190.183 is sharing.

Mount the shared directory:

# mount -t nfs 10.12.190.183:/home/cluster /home/cluster

This command mounts the shared directory of the NFS server 10.12.190.183 onto the local /home/cluster directory. We can also add the following line to /etc/fstab on all child nodes so that the NFS file system is mounted automatically at startup:

10.12.190.183:/home/cluster  /home/cluster  nfs  defaults  0 0

At this point the NFS shared directory can be accessed locally: the /home/cluster folder on every child node shows the contents of the server folder of the same name, and the shared files can be used as if they were local. The folder holding users' parallel programs can be shared through NFS in this way, which avoids copying the program to every node each time. A quick check of the mount is shown below.
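
A quick way to confirm the mount from a child node (standard commands, shown here only as a check):

# mount | grep cluster

# df -h /home/cluster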

IV. Install Intel_C++

# tar xvzf l_ccompxe_2013.1.117.tgz
# cd composer_xe_2013.1.117
# ./install.sh

Set the environment variable:

# vi /etc/profile

Add the line: source /opt/intel/composer_xe_2013.1.117/bin/iccvars.sh intel64

Test environment variable settings:

# which icc

If you see /opt/intel/composer_xe_2013.1.117/bin/intel64/icc, the installation and setup are successful.

V. Install the Intel_Fortran compiler

# tar xvzf l_fcompxe_2013.1.117.tgz
# cd l_fcompxe_2013.1.117
# ./install.sh

Set the environment variable:

# vi /etc/profile

Add the line: source /opt/intel/composer_xe_2013.1.117/bin/compilervars.sh intel64

Test environment variable settings:

# which ifort

If you see /opt/intel/composer_xe_2013.1.117/bin/intel64/ifort, the installation and setup are successful.

VI. Configure and install Intel_MPI

1. Installation and setup

Be sure to install the Fortran compiler before installing MPI; whether the C++ compiler is installed before or after makes no difference. The same applies to other MPI implementations, such as the open-source MPICH: install the two compilers first, set the environment variables, and then install the MPI package.

# tar xvzf l_mpi_p_4.1.3.045.tgz
# cd l_mpi_p_4.1.3.045
# ./install.sh

Set the environment variable:

# vi /etc/profile

Add the line: source /opt/intel/impi/4.1.3.045/bin64/mpivars.sh

Test environment variable settings:

# which mpd
# which mpicc
# which mpiexec
# which mpirun

If a path is shown for each of these commands, the installation and setup are successful.

Edit the /etc/mpd.conf file so that it contains either secretword=myword or MPD_SECRETWORD=myword:

# vi /etc/mpd.conf

Set the file read permission so that only you can read and write:

# chmod 600 /etc/mpd.conf

Non-root users instead create a .mpd.conf file with the same content in their home directory; this must be done on every node (see the sketch below).
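
A minimal sketch for a non-root user, assuming the same secret word as above; run it on every node:

$ echo "MPD_SECRETWORD=myword" > ~/.mpd.conf

$ chmod 600 ~/.mpd.conf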

Create a hostname collection file /root/mpd.hosts:

# vi mpd.hosts

The contents of the file are as follows:

dell     # hostname 1 (hostname:number sets how many processes the node starts, e.g. dell:2)
lenovo   # hostname 2

Before running an MPI application on the Xeon Phi coprocessor, copy the MPI libraries to the following directories on all nodes of the system:

# scp /opt/intel/impi/4.1.3.045/mic/bin/* dell:/bin/
mpiexec              100% 1061KB   1.0MB/s   00:00
pmi_proxy            100%  871KB 871.4KB/s   00:00
...
# scp /opt/intel/impi/4.1.3.045/mic/lib/* dell:/lib64/
libmpi.so.4.1        100% 4391KB   4.3MB/s   00:00
libmpigf.so.4.1      100%  321KB 320.8KB/s   00:00
libmpigc4.so.4.1     100%  175KB 175.2KB/s   00:00
...
# scp /opt/intel/composer_xe_2013_sp1.0.080/compiler/lib/mic/* dell:/lib64/
libimf.so            100% 2516KB   2.5MB/s   00:01
libsvml.so           100% 4985KB   4.9MB/s   00:01
libintlc.so.5        100%  128KB 128.1KB/s   00:00
...

The above are the steps from Intel's official documentation; I personally found this method rather cumbersome and did not use it.

http://software.intel.com/en-us/articles/using-the-intel-mpi-library-on-intel-xeon-phi-coprocessor-systems

What I did instead was export /opt over NFS, mount it at the same path on every node, and set the environment variables. The server must have all of Intel_Fortran, Intel_C++ and Intel_MPI installed, while each node only needs MPI. After the program is compiled on the server, it is distributed to the nodes and can then be run in parallel through the MPI interface. A sketch of the mount entry follows.
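
A sketch of that layout, assuming /opt is added to /etc/exports on the server (10.12.190.183) in the same way as /home/cluster above; each node would then carry an /etc/fstab entry such as:

10.12.190.183:/opt  /opt  nfs  defaults  0 0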

2. How to use MPI

Intel MPI uses the mpd service to manage processes, and MPI programs are run with mpiexec or mpirun.

Start the mpd service on a single machine:

# mpd &

View the mpd service:

# mpdtrace       # show the hostname
# mpdtrace -l    # show the hostname and port number

Turn off mpd process management:

# mpdallexit

Test the MPI installation by compiling and running an MPI program (-o Hello specifies the name of the output file):

# mpicc -o Hello Hello.c          # generate the executable file Hello
# mpicc cpi.c                     # default output file name is a.out
# mpdrun -np 4 ./a.out            # -n or -np sets the number of processes started
# mpiexec [-h | -help | --help]   # view the help

Start the mpd service on the cluster:

# mpdboot -n process-num -f mpd.hosts

This starts mpd on process-num hosts, where mpd.hosts is the file created earlier.

By default, mpdboot uses ssh to log in to the other machines in the cluster; rsh can also be used to start the mpd service on them.

You can choose between ssh and rsh with the --rsh option:

# mpdboot --rsh=rsh -n process-num -f hostfile
or
# mpdboot --rsh=ssh -n process-num -f hostfile

Shut down the mpd service:

# mpdallexit

Use mpiexec or mpirun to execute MPI tasks:

# mpiexec -np 4 ./a.out                          # every node must have a.out under the same path
or
# mpiexec -machinefile filename -np 4 ./a.out
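
Putting the steps together, a session on the two-node cluster above might look like the following sketch, assuming mpd.hosts lists dell and lenovo and a.out exists under the same path on both nodes:

# mpdboot -n 2 -f /root/mpd.hosts

# mpdtrace

# mpiexec -np 4 ./a.out

# mpdallexit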

http://blog.sina.com.cn/s/blog_605f5b4f0100sw3j.html

The running results are as follows:

[root@kvm] # mpiexec -np 4 ./a.out
Process 0 of 4 is on dell
Process 2 of 4 is on dell
Process 3 of 4 is on kvm.local
Process 1 of 4 is on kvm.local
pi is approximately 3.1415926544231274, Error is 0.00000008333343
wall clock time = 0.037788

Note: after the environment variables above are set, you need to log in again (or reboot), or re-execute the profile with the source command, as shown below.
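
For example, after editing the profile used above:

# source /etc/profile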

Thank you for reading! This concludes the article on "how to build a parallel computing MPI environment based on Intel". I hope the above content has been of some help and lets you learn more. If you think the article is good, please share it for more people to see!
