FRIB Parallel analysis
1.0
Framework for MPI Parallel data analysis at FRIB
#include <AbstractApplication.h>
Public Member Functions | |
AbstractApplication (int argc, char **argv) | |
virtual | ~AbstractApplication () |
virtual void | operator() (CParameterReader &paramReader) |
virtual void | dealer (int argc, char **argv, AbstractApplication *pApp)=0 |
virtual void | farmer (int argc, char **argv, AbstractApplication *pApp)=0 |
virtual void | outputter (int argc, char **argv, AbstractApplication *pApp)=0 |
virtual void | worker (int argc, char **argv, AbstractApplication *pApp)=0 |
MPI_Datatype & | messageHeaderType () |
MPI_Datatype & | requestDataType () |
MPI_Datatype & | parameterHeaderDataType () |
MPI_Datatype & | parameterValueDataType () |
MPI_Datatype & | parameterDefType () |
MPI_Datatype & | variableDefType () |
unsigned | numWorkers () |
void | forwardPassThrough (const void *pData, size_t nBytes) |
int | getRequest () |
void | sendEofs () |
void | sendEof () |
void | requestData (size_t maxBytes) |
void | throwMPIError (int status, const char *reason) |
Protected Member Functions | |
int | getArgc () const |
char ** | getArgv () |
void | makeDataTypes () |
This class is a strategy pattern for the dealer/worker/farmer/outputter parallel execution pattern implemented in MPI. If an application runs with n MPI ranks, the roles are allocated as follows:
The dealer, rank 0, is connected to the data source. Workers send it requests for work items; the dealer responds either with data or with an end message indicating there is no more data. When the dealer has sent end messages to all of the workers it will call MPI_Finalize to do its part to exit the application.
The farmer gets messages from the workers and re-orders them into the original order. When the dealer sends a work item it is actually sent as a pair of messages: the first contains a work item number and a payload size, while the second contains the payload itself (sent as an array of chars). The farmer orders the work items by item number and sends them to the outputter. End messages from the workers are counted; when the farmer has gotten an end message from every worker, it sends an end message to the outputter and calls MPI_Finalize to do its part to exit.
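The farmer's re-ordering step can be sketched without MPI as a buffer keyed by work item number: items are held until they are contiguous with the last item emitted, then forwarded in order. This is an illustrative sketch, not the framework's actual implementation; the names `ReorderBuffer`, `add`, and `emitReady` are hypothetical.

```cpp
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of the farmer's re-ordering logic: work items may
// arrive from workers in any order, but must be forwarded to the
// outputter in their original sequence.
class ReorderBuffer {
    std::map<std::size_t, std::string> m_pending; // item number -> payload
    std::size_t m_next = 0;                       // next item to emit
public:
    // Buffer one work item received from a worker.
    void add(std::size_t itemNumber, const std::string& payload) {
        m_pending[itemNumber] = payload;
    }
    // Return, in order, every payload that is now contiguous with the
    // last item emitted (i.e. ready to forward to the outputter).
    std::vector<std::string> emitReady() {
        std::vector<std::string> out;
        auto it = m_pending.find(m_next);
        while (it != m_pending.end()) {
            out.push_back(it->second);
            m_pending.erase(it);
            it = m_pending.find(++m_next);
        }
        return out;
    }
};
```

In the real farmer the buffered payloads would be sent to the outputter as they become ready, rather than collected into a vector.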
The outputter gets messages from the farmer that are either results or end marks. The outputter is connected to the data sink and writes the messages from the farmer to it. When it receives an end message, it will call MPI_Finalize to do its part to exit.
Workers are responsible for actually performing the application-specific operations on the data. They request and receive blocks of data to operate on from the dealer, transform those blocks into output data, and send the results to the farmer for re-ordering. If the dealer sends a worker an end message, the worker will send an end message to the farmer and call MPI_Finalize to do its part to exit.
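The worker's request/transform/send loop can be simulated without MPI by treating the dealer as a queue that eventually reports end-of-data. Everything here is a stand-in for illustration: `nextBlock` plays the role of requestData() plus the receive, upper-casing plays the role of the application-specific transform, and the returned vector stands in for the stream of messages to the farmer.

```cpp
#include <cctype>
#include <deque>
#include <optional>
#include <string>
#include <vector>

using Block = std::string;

// Stands in for a data request to the dealer: returns the next block,
// or std::nullopt when the dealer would send an end message.
std::optional<Block> nextBlock(std::deque<Block>& dealerQueue) {
    if (dealerQueue.empty()) return std::nullopt; // dealer's end message
    Block b = std::move(dealerQueue.front());
    dealerQueue.pop_front();
    return b;
}

// Hypothetical MPI-free sketch of the worker loop: pull blocks until
// end-of-data, transform each one, and collect the results that would
// be sent to the farmer.
std::vector<Block> runWorker(std::deque<Block> dealerQueue) {
    std::vector<Block> toFarmer;
    while (auto block = nextBlock(dealerQueue)) {
        // Application-specific transform; upper-casing stands in here.
        for (auto& c : *block) {
            c = static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
        }
        toFarmer.push_back(*block);
    }
    // On the end message the real worker sends an end message to the
    // farmer and calls MPI_Finalize; here we simply return.
    return toFarmer;
}
```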
Each process type is implemented as a pure virtual method. The function call operator() determines the process's role from its MPI rank and invokes the corresponding method.
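The rank-to-role assignment can be illustrated with a small dispatch function. Note that the source pins only the dealer to rank 0; assigning the farmer to rank 1 and the outputter to rank 2 is an assumption made here for illustration, as is the function name `roleForRank`.

```cpp
#include <string>

// Hypothetical sketch of how operator() might map an MPI rank onto one
// of the four roles. Only rank 0 (dealer) is fixed by the class
// description; ranks 1 and 2 are assumed placements.
std::string roleForRank(int rank) {
    switch (rank) {
        case 0:  return "dealer";    // connected to the data source
        case 1:  return "farmer";    // re-orders worker results (assumed rank)
        case 2:  return "outputter"; // connected to the data sink (assumed rank)
        default: return "worker";    // all remaining ranks do the actual work
    }
}
```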
A typical use of this class would be:

    using namespace frib::analysis;

    int main(int argc, char** argv) {
        // MyApplication is a concrete class derived from
        // AbstractApplication.

        CTCLParameterReader configReader("configFile.tcl");
        MyApplication app(argc, argv);
        app(configReader);

        // When we get here our role has called MPI_Finalize.

        exit(EXIT_SUCCESS);
    }
AbstractApplication::AbstractApplication | ( | int | argc, |
char ** | argv | ||
) |
constructor
argc | - number of command line arguments. |
argv | - pointer to the command line arguments. |
virtual AbstractApplication::~AbstractApplication ( )
virtual
destructor
void AbstractApplication::forwardPassThrough | ( | const void * | pData, |
size_t | nBytes | ||
) |
forwardPassThrough Send bytes without any real interpretation to the output
pData | - data to send. |
nBytes | - number of bytes to send. |
int AbstractApplication::getArgc ( ) const
protected
getArgc
Returns the number of command line parameters after MPI_Init is done modifying them.
char ** AbstractApplication::getArgv ( )
protected
getArgv
Returns a pointer to the command line arguments after MPI_Init is done modifying them.
int AbstractApplication::getRequest | ( | ) |
getRequest Receive a request from a worker and return the rank of the sender.
void AbstractApplication::makeDataTypes ( )
protected
makeDataTypes
Creates any MPI custom data types we need. This sets the datatype member data. Getters exist to fetch references to the data types we've created.
MPI_Datatype & AbstractApplication::messageHeaderType | ( | ) |
messageHeaderType Returns a reference to the MPI type item for a message header.
unsigned AbstractApplication::numWorkers | ( | ) |
Returns the number of worker processes in the application. This is just size - 3 (dealer, farmer, outputter).
virtual void AbstractApplication::operator() ( CParameterReader & paramReader )
virtual
operator() Entry point to the MPI pattern.
paramReader | - object that knows how to read the parameter file. |
MPI_Datatype & AbstractApplication::parameterDefType | ( | ) |
parameterDefType Returns a reference to the MPI data type for a parameter definition record.
MPI_Datatype & AbstractApplication::parameterHeaderDataType | ( | ) |
parameterHeaderDataType Returns a reference to the MPI data type for a parameter header record.
MPI_Datatype & AbstractApplication::parameterValueDataType | ( | ) |
parameterValueDataType Returns a reference to the MPI data type for a parameter value record.
void AbstractApplication::requestData | ( | size_t | maxBytes | ) |
requestData Send a request for data to the dealer
maxBytes | - maximum payload we want to accept. |
MPI_Datatype & AbstractApplication::requestDataType | ( | ) |
requestDataType returns a reference to the MPI type item for a data request record.
void AbstractApplication::sendEof | ( | ) |
sendEof Get a request and send an EOF message to the requesting worker.
void AbstractApplication::sendEofs | ( | ) |
sendEofs Send EOFs to all the workers.
void AbstractApplication::throwMPIError | ( | int | status, |
const char * | prefix | ||
) |
throwMPIError Analyzes an MPI call status return, throwing a runtime error if the status is not normal.
status | - status from the MPI call. |
prefix | - Prefix to the error text from status |
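The check-and-throw idiom behind throwMPIError can be sketched in plain C++. This is not the framework's implementation: `kSuccess` merely stands in for MPI_SUCCESS (which is 0 in real MPI), and a real version would use MPI_Error_string to turn the status code into MPI's own message text; the name `throwOnError` is hypothetical.

```cpp
#include <stdexcept>
#include <string>

constexpr int kSuccess = 0; // stands in for MPI_SUCCESS

// Hypothetical sketch of the status-checking idiom: do nothing on
// success, otherwise wrap the caller-supplied prefix and the numeric
// status into a std::runtime_error.
void throwOnError(int status, const char* prefix) {
    if (status == kSuccess) return;
    throw std::runtime_error(
        std::string(prefix) + ": MPI status " + std::to_string(status));
}
```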
MPI_Datatype & AbstractApplication::variableDefType | ( | ) |
variableDefType