[mpich-discuss] first exit problems

Antonio J. Peña apenya at mcs.anl.gov
Mon Jul 8 11:20:53 CDT 2013

Maybe you could use a root process to coordinate the others. Say your root is 
rank 0, and all ranks communicate only with rank 0. When any process finds the 
solution, it notifies rank 0, which then broadcasts the exit message. The root 
process can simply discard a second notification from another process that also 
found a solution before receiving the exit message. This way the root process 
can handle ties and avoid race conditions.


On Monday, July 08, 2013 11:02:48 AM Gideon Simpson wrote:
> Hi, I was wondering if someone had a good MPI solution to the following
> first passage problem.  Suppose I have N workers, each using a Monte Carlo
> method to solve a problem.  An example would be if I wanted to find the
> exit distribution in time and space of a stochastic differential equation
> from some compact set in space.
> Assuming that each worker is properly receiving an independent stream of
> pseudo random numbers, and they are computing asynchronously, when one
> worker finds a solution, it must notify the others that they can cease
> working, and the program can continue.  I have in mind something like:
> while (!local_exit && !global_exit){
> 	/*Check if a message has been received*/
> 	MPI_Iprobe(...,&global_exit,...);
> 	/*If no one else finished, continue the algorithm */
> 	if(!global_exit){
> 		/* Step algorithm */
> 		local_exit = exit_test(...);
> 	}
> }
> if(local_exit){
> 	/* Alert all other workers */
> 	for(...){
> 		MPI_Isend(...);
> 	}
> }
> else{
> 	/* Receive the message that was sent from the other worker */
> 	MPI_Recv(...);
> }
> However, I believe this leads to a race condition because if processor 0
> finds an exit, processor 1 might also find one before processor 0 has a
> chance to do an MPI_Isend() to 1.  Assuming that, for the moment, I am not
> too concerned about tie breaking, I would really like to find a robust
> solution to this problem, as I have a lot of first exit/first passage type
> problems that could be solved asynchronously.
> Thanks for any suggestions,
> -gideon
> _______________________________________________
> discuss mailing list     discuss at mpich.org
> To manage subscription options or unsubscribe:
> https://lists.mpich.org/mailman/listinfo/discuss
