[mpich-discuss] beginner for remote mem access

Yi Gu gyi at mtu.edu
Wed Jul 3 13:26:29 CDT 2013


Hi, Sufeng:

I am not sure what the second problem is, but I think
MPI_Win_create(MPI_BOTTOM, 0, sizeof(float), MPI_INFO_NULL, MPI_COMM_WORLD, &win);
should be
MPI_Win_create(MPI_BOTTOM, 0, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &win);

For the third question: yes, every process in the communicator has to make
that call, because MPI_Win_create is collective, even for processes that
expose no memory of their own.
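
Also, a guess about the assertion failure: MPI_Win_fence is collective over
the window, but your code calls the closing fence only on rank 1, so the
fence epochs never match up before MPI_Win_free. Here is a minimal sketch of
the matched pattern, untested and with the array shortened to keep it brief:

#include "mpi.h"
#include <stdio.h>
#define SIZE 8

int main(int argc, char *argv[])
{
        int rank;
        float a[SIZE] = {1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0};
        float b[SIZE] = {0.0};  /* initialized, so every rank prints defined values */
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* MPI_Win_create is collective: every rank calls it, even with size 0 */
        if (rank == 0)
                MPI_Win_create(a, SIZE*sizeof(float), sizeof(float),
                               MPI_INFO_NULL, MPI_COMM_WORLD, &win);
        else
                MPI_Win_create(MPI_BOTTOM, 0, 1, MPI_INFO_NULL,
                               MPI_COMM_WORLD, &win);

        /* both fences are collective as well: every rank opens and closes
           the access epoch; only rank 1 issues the MPI_Get in between */
        MPI_Win_fence(0, win);
        if (rank == 1)
                MPI_Get(b, SIZE, MPI_FLOAT, 0, 0, SIZE, MPI_FLOAT, win);
        MPI_Win_fence(0, win);

        printf("rank= %d  b[0]= %3.1f\n", rank, b[0]);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
}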

Yi


On Wed, Jul 3, 2013 at 1:18 PM, Yi Gu <gyi at mtu.edu> wrote:

> Hi, Sufeng:
>
> I think you should initialize the array b first: since you print it out
> without initialization, it could print anything.
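>
> For example, a zero initializer would make the expected output explicit
> (an untested one-liner, not from your original code):
>
>         float b[SIZE] = {0.0};  /* remaining elements default to 0 */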
>
> Yi
>
>
> On Wed, Jul 3, 2013 at 1:12 PM, Sufeng Niu <sniu at hawk.iit.edu> wrote:
>
>> Hi,
>>
>> I am a beginner just trying out remote memory access, and I wrote a
>> simple program to test it:
>>
>> #include "mpi.h"
>> #include <stdio.h>
>> #define SIZE 8
>>
>> int main(int argc, char *argv[])
>> {
>>         int numtasks, rank, source=0, dest, tag=1, i;
>>         float a[64] =
>>         {
>>                 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0,
>> 12.0, 13.0, 14.0, 15.0, 16.0,
>>                 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0,
>> 12.0, 13.0, 14.0, 15.0, 16.0,
>>                 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0,
>> 12.0, 13.0, 14.0, 15.0, 16.0,
>>                 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0,
>> 12.0, 13.0, 14.0, 15.0, 16.0,
>>         };
>>         float b[SIZE];
>>
>>         MPI_Status stat;
>>
>>         MPI_Init(&argc,&argv);
>>         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>         MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
>>
>>         MPI_Win win;
>>
>>         // check processor rank
>>         char processor_name[MPI_MAX_PROCESSOR_NAME];
>>         int name_len;
>>         MPI_Get_processor_name(processor_name, &name_len);
>>         printf("-- processor %s, rank %d out of %d processors\n",
>> processor_name, rank, numtasks);
>>
>>         MPI_Barrier(MPI_COMM_WORLD);
>>
>>         if (numtasks == 4) {
>>                 if (rank == 0) {
>>                         printf("create window \n");
>>                         MPI_Win_create(a, 8*sizeof(float), sizeof(float),
>> MPI_INFO_NULL, MPI_COMM_WORLD, &win);
>>
>>                 }
>>                 else {
>>                         MPI_Win_create(MPI_BOTTOM, 0, sizeof(float),
>> MPI_INFO_NULL, MPI_COMM_WORLD, &win);
>>                 }
>>
>>                 MPI_Win_fence(0, win);
>>
>>                 if (rank == 1){
>>                         MPI_Get(b, SIZE, MPI_FLOAT, 0, 8, SIZE,
>> MPI_FLOAT, win);
>>
>>                         MPI_Win_fence(0, win);
>>                 }
>>
>>                 printf("rank= %d  b= %3.1f %3.1f %3.1f %3.1f %3.1f %3.1f
>> %3.1f %3.1f\n", rank,b[0],b[1],b[2],b[3],b[4],b[5],b[6],b[7]);
>>         }
>>         else
>>                 printf("Must specify %d processors. Terminating.\n",SIZE);
>>
>>         MPI_Win_free(&win);
>>         MPI_Finalize();
>> }
>>
>> However, the terminal gives some odd results:
>> rank= 0  b= 0.0 0.0 0.0 0.0 0.0 0.0 -71847793475452928.0 0.0
>> rank= 2  b= 0.0 0.0 0.0 0.0 0.0 0.0 222086852849451401216.0 0.0
>> rank= 3  b= 0.0 0.0 0.0 0.0 0.0 0.0 -74882.4 0.0
>> rank= 1  b= 9.0 10.0 11.0 12.0 13.0 14.0 15.0 16.0
>> Rank 1 shows the correct result, but the others should be all zero.
>> The terminal also prints an error: "Assertion failed in file
>> src/mpid/ch3/src/ch3u_rma_sync.c at line 5061: win_ptr->my_counter >= 0
>> internal ABORT - process 0"
>>
>> Another question: if I use remote memory access, every process that does
>> not create a window for sharing must add the extra line
>> MPI_Win_create(MPI_BOTTOM, 0, data_type, MPI_INFO_NULL, MPI_COMM_WORLD,
>> &win); is that correct?
>>
>> Thanks a lot!
>>
>> --
>> Best Regards,
>> Sufeng Niu
>> ECASP lab, ECE department, Illinois Institute of Technology
>> Tel: 312-731-7219
>>
>
>