MPI Hello World

mpi-hello-world.c
#include <mpi.h>
#include <stdio.h>
 
int main(int argc, char** argv) {
    // Initialize the MPI environment
    MPI_Init(NULL, NULL);
 
    // Get the number of processes
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
 
    // Get the rank of the process
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
 
    // Check if the current process is the master process
    if (world_rank == 0) {
        // Master process prints this message
        printf("Hello world from master\n");
    } else {
        // All other processes print this message
        printf("Hello world from processor %d out of %d processors\n",
               world_rank, world_size);
    }
 
    // Finalize the MPI environment.
    MPI_Finalize();
    return 0;
}

The if (world_rank == 0) statement checks whether the current process is the master process (the one with rank 0).
If it is the master process, it prints “Hello world from master”.
If it is not the master process (the else branch), it prints “Hello world from process” together with its own rank and the total number of processes.
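
A typical way to compile and run the program, assuming an MPI implementation such as Open MPI or MPICH is installed (the exact command names and the process count of 4 are illustrative, and the output lines may appear in any order):

mpicc mpi-hello-world.c -o mpi-hello-world
mpirun -np 4 ./mpi-hello-world

Possible output with 4 processes:

Hello world from master
Hello world from process 1 out of 4 processes
Hello world from process 2 out of 4 processes
Hello world from process 3 out of 4 processes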

Unique Identifier: In MPI, each process is assigned a unique identifier known as its “rank”. Within a communicator (here, MPI_COMM_WORLD), ranks are integers from 0 to the number of processes minus one, assigned consecutively. The process with rank 0 is commonly referred to as the “master” process.

Role of the Master Process:

Coordination: The master process often takes on a coordinating role. It might be responsible for distributing work among the other processes (often called “worker” processes or “slaves”), gathering results from them, and managing the overall task flow; a minimal sketch of this pattern is shown after this list.

Initial Operations: Sometimes, the master process performs initial setup operations, like reading input data and distributing parts of it to other processes.

Communication Hub: In some architectures, the master process acts as a central hub for communication, receiving data from and sending data to the worker processes (see the send/receive sketch after this list).

Not Always Special: It's important to note that in many MPI programs, all processes, including the master, perform similar work. The designation of a master process is more about the role it plays in a specific program's architecture than about any intrinsic property of the process itself. In some parallel programs, the master process does computation just like the worker processes.

Scalability and Efficiency: The concept of a master process helps in managing and scaling complex computations across multiple nodes in a cluster. However, in some cases, having a single master process can become a bottleneck, especially for very large-scale computations. Hence, more distributed or decentralized approaches may be used in those scenarios.

Implementation Choice: Whether to use a master process and how it is used depends on the specific requirements of the application and the design choices made by the programmer. There is no mandatory rule in MPI that you must have a master process; it's more of a common pattern used in parallel programming.
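
The sketch below illustrates the “Coordination” and “Initial Operations” roles described above. It is not part of the original hello-world program, only a minimal self-contained example: the master (rank 0) creates an array of numbers, MPI_Scatter hands an equal chunk to every process, each process computes a partial sum, and MPI_Reduce combines the partial sums back on the master. The file name, the chunk size of 4, and the array contents are arbitrary choices made for the illustration.

mpi-master-worker.c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int world_size, world_rank;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    // Each process receives this many numbers (arbitrary choice).
    const int chunk = 4;
    int *data = NULL;

    if (world_rank == 0) {
        // Initial operation: only the master builds the full input array.
        data = malloc((size_t)chunk * world_size * sizeof(int));
        for (int i = 0; i < chunk * world_size; i++) {
            data[i] = i + 1;   // 1, 2, 3, ...
        }
    }

    // Coordination: the master scatters one chunk to every process,
    // including itself.
    int local[4];              // length must match 'chunk'
    MPI_Scatter(data, chunk, MPI_INT, local, chunk, MPI_INT,
                0, MPI_COMM_WORLD);

    // Every process, master included, does the same local work.
    int local_sum = 0;
    for (int i = 0; i < chunk; i++) {
        local_sum += local[i];
    }

    // The partial sums are combined on the master.
    int total = 0;
    MPI_Reduce(&local_sum, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (world_rank == 0) {
        printf("Total sum computed by %d processes: %d\n", world_size, total);
        free(data);
    }

    MPI_Finalize();
    return 0;
}

Because MPI_Scatter and MPI_Reduce are collective operations, every process must call them; the master differs only in providing the root arguments and in holding the full input and the final result.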
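
A separate sketch of the “Communication Hub” role, again only an illustration: every worker sends one value to the master with MPI_Send, and the master collects the values one by one with MPI_Recv. The payload here (each rank squared) is an arbitrary stand-in for a real result, and the file name is likewise arbitrary.

mpi-hub.c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int world_size, world_rank;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    if (world_rank == 0) {
        // The master acts as the hub: it receives one result from
        // every worker process, in rank order.
        for (int source = 1; source < world_size; source++) {
            int result;
            MPI_Recv(&result, 1, MPI_INT, source, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("Master received %d from worker %d\n", result, source);
        }
    } else {
        // Each worker computes something (here, just its rank squared)
        // and sends it to the master.
        int result = world_rank * world_rank;
        MPI_Send(&result, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

This version receives from the workers in rank order; MPI_ANY_SOURCE could be used instead to accept results in whatever order they arrive.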
