Parallel Computation using Message Passing Interface (MPI)
Date
2015-04-27
Authors
Elamin Hamoda, Salma
Publisher
UOFK
Abstract
As a programmer, you may need to solve ever-larger, more memory-intensive problems, or simply solve problems faster than is possible on a serial computer. You can turn to parallel programming and parallel computers to satisfy these needs. Parallel programming methods on parallel computers give you access to greater memory and central processing unit (CPU) resources than are available on serial computers. Hence, you are able to solve large problems whose solutions may not otherwise have been possible, as well as solve problems more quickly.
One of the basic methods of programming for parallel computing is the use of message-passing libraries. These libraries manage the transfer of data between instances of a parallel program, which usually execute on multiple processors in a parallel computing architecture.
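To make the message-passing idiom concrete, here is a minimal sketch using the standard MPI C API from C++ (this example is illustrative and not taken from the thesis): rank 0 sends an integer to rank 1. It would typically be compiled with a wrapper such as mpic++ and launched with mpirun -np 2, though the exact commands depend on the local MPI installation.

// Minimal message-passing sketch: rank 0 sends one integer to rank 1.
#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {                          // needs at least two processes
        if (rank == 0)
            std::cerr << "run with at least 2 processes" << std::endl;
        MPI_Finalize();
        return 1;
    }

    if (rank == 0) {
        int payload = 42;                    // data to transfer
        MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int payload = 0;
        MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        std::cout << "rank 1 received " << payload << std::endl;
    }

    MPI_Finalize();
    return 0;
}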
This thesis studies parallel processing concepts and one of the most widely used message-passing libraries, the Message Passing Interface (MPI).
The thesis first introduces the concept of parallel processing, then discusses the fundamentals of message passing and the environments used, such as the Local Area Multicomputer (LAM). The thesis focuses mainly on the concepts of MPI programming.
A program is designed in the C++ language to investigate the features of the MPI libraries when used to compute Fast Fourier Transforms using one, two, four, and eight processes running on a LAM communicator.
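The abstract does not include the program itself. The following is a hypothetical sketch of the general approach, under the assumption that the work is divided by rank: the output bins of a (deliberately naive, O(N^2)) discrete Fourier transform are split evenly across ranks, and MPI_Wtime is used to time the run, so that the same source can be launched with 1, 2, 4, or 8 processes (e.g. mpirun -np 4). A real implementation would use a proper FFT algorithm; MPI_Gather and MPI_Wtime are standard MPI calls.

// Hypothetical sketch (not the thesis program): a naive DFT whose
// output bins are split evenly across MPI ranks.
#include <mpi.h>
#include <cmath>
#include <complex>
#include <iostream>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const double PI = 3.14159265358979323846;
    const int N = 1024;                 // assumed divisible by the rank count
    std::vector<std::complex<double>> x(N);
    for (int n = 0; n < N; ++n)         // every rank builds the same input
        x[n] = std::complex<double>(std::sin(2.0 * PI * n / N), 0.0);

    const int chunk = N / size;         // output bins owned by this rank
    std::vector<std::complex<double>> local(chunk);

    double t0 = MPI_Wtime();
    for (int k = 0; k < chunk; ++k) {   // naive O(N^2 / p) DFT of owned bins
        const int bin = rank * chunk + k;
        std::complex<double> sum(0.0, 0.0);
        for (int n = 0; n < N; ++n) {
            double a = -2.0 * PI * bin * n / N;
            sum += x[n] * std::complex<double>(std::cos(a), std::sin(a));
        }
        local[k] = sum;
    }

    // Gather every rank's bins on rank 0; std::complex<double> is
    // layout-compatible with two doubles, hence the count of 2 * chunk.
    std::vector<std::complex<double>> X(rank == 0 ? N : 0);
    MPI_Gather(local.data(), 2 * chunk, MPI_DOUBLE,
               X.data(), 2 * chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0)
        std::cout << size << " process(es): " << (MPI_Wtime() - t0)
                  << " s" << std::endl;

    MPI_Finalize();
    return 0;
}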
Finally, the results obtained are compared to assess the benefit of using parallel computation to solve a specific problem, as well as the limits of increasing the number of running processes with regard to efficiency.
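For reference, such comparisons conventionally rest on the standard definitions of speedup and efficiency (stated here for clarity; they are not spelled out in the abstract). With T(p) denoting the running time on p processes:

S(p) = T(1) / T(p),    E(p) = S(p) / p

An efficiency E(p) that falls well below 1 as p grows signals that communication overhead is eroding the gain from adding further processes.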
Description
104 pages