[mpich-discuss] beginner for remote mem access (sufeng)

Sufeng Niu sniu at hawk.iit.edu
Fri Jul 5 10:28:03 CDT 2013


Hi Jeff,

Thanks a lot for your reply, and sorry about the subject-line issue.

1.
> MPI_Win_fence is not similar to MPI_Barrier in some cases.  The
> MPICH implementation can turn some calls into no-ops and others into
> MPI_Reduce_scatter.

I am not quite sure what "turn some calls into no-ops and others into
MPI_Reduce_scatter" means. Could you please give an example if possible?
Another thing: if a process creates a window for other processes, but the
data only becomes available for access after some operations, should I use
MPI_Win_fence or MPI_Barrier to synchronize, or some other method?
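To make my question concrete, here is a small sketch of how I currently
understand fence synchronization (just my guess, not tested on our cluster):
every rank creates the window and calls both fences collectively, and the
assert flags MPI_MODE_NOPRECEDE / MPI_MODE_NOSUCCEED are, as far as I
understand, the hints that let the implementation make a fence cheaper than
a full barrier. Is this the right pattern?

#include "mpi.h"
#include <stdio.h>
#define SIZE 8

int main(int argc, char *argv[])
{
        int rank;
        float a[SIZE] = {1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0};
        float b[SIZE] = {0};    /* initialized this time */
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* window creation is collective: rank 0 exposes a[], others expose nothing */
        if (rank == 0)
                MPI_Win_create(a, SIZE*sizeof(float), sizeof(float),
                               MPI_INFO_NULL, MPI_COMM_WORLD, &win);
        else
                MPI_Win_create(MPI_BOTTOM, 0, sizeof(float),
                               MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        /* open the access epoch: every rank calls the fence */
        MPI_Win_fence(MPI_MODE_NOPRECEDE, win);

        if (rank == 1)
                MPI_Get(b, SIZE, MPI_FLOAT, 0, 0, SIZE, MPI_FLOAT, win);

        /* close the epoch: again every rank; b[] is only valid after this returns */
        MPI_Win_fence(MPI_MODE_NOSUCCEED, win);

        if (rank == 1)
                printf("rank %d got b[0]=%3.1f b[7]=%3.1f\n", rank, b[0], b[7]);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
}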

2.
> Regarding "use MPI window as thread level", I really don't
> understand your thinking at all.  MPI RMA is not shared memory nor is
> MPI a threading model.

Sorry for the unclear statement; I am currently writing a hybrid
multithreaded + MPI program. Let me give an example: I have 3 processes,
and each one has 8 threads. Thread 0 in process 0 creates an RMA window.
If I would like all the other threads to access it, should thread 0 in
process 1 and thread 0 in process 2 use MPI_Get to fetch the data from the
window, and then let the threads inside each process load the data through
shared memory? I am not sure what the proper way to use RMA in a hybrid
model is.
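To make this concrete, here is a sketch of the pattern I have in mind (just
an illustration of my question, assuming OpenMP threads and the
MPI_THREAD_FUNNELED level; I have not tested it): one thread per process
performs the MPI_Get inside a fence epoch, and after a thread barrier the
other threads of that process read the fetched buffer from shared memory.
Is this the proper way, or is there a better model for RMA in hybrid code?

#include "mpi.h"
#include <omp.h>
#include <stdio.h>
#define SIZE 8

int main(int argc, char *argv[])
{
        int provided, rank;
        float a[SIZE] = {1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0};
        float local[SIZE] = {0};        /* process-local copy filled by MPI_Get */
        MPI_Win win;

        /* FUNNELED: only the main thread will make MPI calls */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0)
                MPI_Win_create(a, SIZE*sizeof(float), sizeof(float),
                               MPI_INFO_NULL, MPI_COMM_WORLD, &win);
        else
                MPI_Win_create(MPI_BOTTOM, 0, sizeof(float),
                               MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        #pragma omp parallel num_threads(8)
        {
                /* only the master thread touches MPI */
                #pragma omp master
                {
                        MPI_Win_fence(0, win);
                        if (rank != 0)
                                MPI_Get(local, SIZE, MPI_FLOAT, 0, 0, SIZE,
                                        MPI_FLOAT, win);
                        MPI_Win_fence(0, win);
                }
                /* all 8 threads wait until the MPI_Get has completed */
                #pragma omp barrier

                /* now every thread of the process reads local[] from shared memory */
                printf("rank %d thread %d sees local[0]=%3.1f\n",
                       rank, omp_get_thread_num(), local[0]);
        }

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
}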

Thank you so much!

Sufeng


On Fri, Jul 5, 2013 at 6:15 AM, <discuss-request at mpich.org> wrote:

>
> Today's Topics:
>
>    1. Re:  beginner for remote mem access (Jeff Hammond)
>    2.  Running configure on Suse Enterprise 10
>       (anton.s.murfin at sellafieldsites.com)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 3 Jul 2013 14:50:44 -0500
> From: Jeff Hammond <jeff.science at gmail.com>
> To: discuss at mpich.org
> Subject: Re: [mpich-discuss] beginner for remote mem access
> Message-ID:
>         <CAGKz=uJN2iH3Pxjxyq=YzrP=
> XG5Tp2+huxmBFTh+Lr7nVdLU4w at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> 1) When you reply to the digest emails, change the subject to that of
> the thread you care to respond to.  That will make it much easier for
> others to track the thread.
> 2) MPI_Win_fence is not similar to MPI_Barrier in some cases.  The
> MPICH implementation can turn some calls into no-ops and others into
> MPI_Reduce_scatter.
> 3) Regarding "use MPI window as thread level", I really don't
> understand your thinking at all.  MPI RMA is not shared memory nor is
> MPI a threading model.
>
> Jeff
>
> On Wed, Jul 3, 2013 at 2:46 PM, Sufeng Niu <sniu at hawk.iit.edu> wrote:
> >
> > Thanks a lot for your replies. I should have initialized the b array, and I
> > figured out that MPI_Win_fence should be called by all processes, which
> > solves the assertion-failure issue.
> >
> > It seems that MPI_Win_fence is very similar to MPI_Barrier. Can I say that
> > MPI_Win_fence is the RMA version of MPI_Barrier?
> >
> > Sorry about so many questions. Can I use an MPI window at the thread level?
> > Let's say I have 4 processes, and each process has 8 threads. If only one
> > thread is used to create the memory-access window, the other processes copy
> > from it and their internal 8 threads use shared memory to perform some
> > operations. Is that OK?
> >
> > Thanks a lot!
> >
> >
> >
> > On Wed, Jul 3, 2013 at 1:28 PM, <discuss-request at mpich.org> wrote:
> >>
> >>
> >> Today's Topics:
> >>
> >>    1.  beginner for remote mem access (Sufeng Niu)
> >>    2. Re:  beginner for remote mem access (Yi Gu)
> >>    3. Re:  beginner for remote mem access (Yi Gu)
> >>    4. Re:  beginner for remote mem access (Jeff Hammond)
> >>
> >>
> >> ----------------------------------------------------------------------
> >>
> >> Message: 1
> >> Date: Wed, 3 Jul 2013 13:12:46 -0500
> >> From: Sufeng Niu <sniu at hawk.iit.edu>
> >> To: discuss at mpich.org
> >> Subject: [mpich-discuss] beginner for remote mem access
> >> Message-ID:
> >>
> >> <CAFNNHkwxqXUB_+b8_tZ=L5K7VcDVhsz4ZMDDMGF55v-FAhgeqg at mail.gmail.com>
> >> Content-Type: text/plain; charset="iso-8859-1"
> >>
> >> Hi,
> >>
> >> I am a beginner just trying to use remote memory access, and I wrote a
> >> simple program to test it:
> >>
> >> #include "mpi.h"
> >> #include <stdio.h>
> >> #define SIZE 8
> >>
> >> int main(int argc, char *argv[])
> >> {
> >>         int numtasks, rank, source=0, dest, tag=1, i;
> >>         float a[64] =
> >>         {
> >>                 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0,
> >> 12.0, 13.0, 14.0, 15.0, 16.0,
> >>                 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0,
> >> 12.0, 13.0, 14.0, 15.0, 16.0,
> >>                 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0,
> >> 12.0, 13.0, 14.0, 15.0, 16.0,
> >>                 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0,
> >> 12.0, 13.0, 14.0, 15.0, 16.0,
> >>         };
> >>         float b[SIZE];
> >>
> >>         MPI_Status stat;
> >>
> >>         MPI_Init(&argc,&argv);
> >>         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
> >>         MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
> >>
> >>         MPI_Win win;
> >>
> >>         // check processor rank
> >>         char processor_name[MPI_MAX_PROCESSOR_NAME];
> >>         int name_len;
> >>         MPI_Get_processor_name(processor_name, &name_len);
> >>         printf("-- processor %s, rank %d out of %d processors\n",
> >> processor_name, rank, numtasks);
> >>
> >>         MPI_Barrier(MPI_COMM_WORLD);
> >>
> >>         if (numtasks == 4) {
> >>                 if (rank == 0) {
> >>                         printf("create window \n");
> >>                         MPI_Win_create(a, 8*sizeof(float), sizeof(float),
> >>                                        MPI_INFO_NULL, MPI_COMM_WORLD, &win);
> >>
> >>                 }
> >>                 else {
> >>                         MPI_Win_create(MPI_BOTTOM, 0, sizeof(float),
> >> MPI_INFO_NULL, MPI_COMM_WORLD, &win);
> >>                 }
> >>
> >>                 MPI_Win_fence(0, win);
> >>
> >>                 if (rank == 1){
> >>                         MPI_Get(b, SIZE, MPI_FLOAT, 0, 8, SIZE, MPI_FLOAT,
> >>                                 win);
> >>
> >>                         MPI_Win_fence(0, win);
> >>                 }
> >>
> >>                 printf("rank= %d  b= %3.1f %3.1f %3.1f %3.1f %3.1f %3.1f
> >> %3.1f %3.1f\n", rank,b[0],b[1],b[2],b[3],b[4],b[5],b[6],b[7]);
> >>         }
> >>         else
> >>                 printf("Must specify %d processors. Terminating.\n", SIZE);
> >>
> >>         MPI_Win_free(&win);
> >>         MPI_Finalize();
> >> }
> >>
> >> However the terminal gives some odd results:
> >> rank= 0  b= 0.0 0.0 0.0 0.0 0.0 0.0 -71847793475452928.0 0.0
> >> rank= 2  b= 0.0 0.0 0.0 0.0 0.0 0.0 222086852849451401216.0 0.0
> >> rank= 3  b= 0.0 0.0 0.0 0.0 0.0 0.0 -74882.4 0.0
> >> rank= 1  b= 9.0 10.0 11.0 12.0 13.0 14.0 15.0 16.0
> >> Rank 1 gives the correct result, but the others should be all zero. The
> >> terminal also gives this message: "Assertion failed in file
> >> src/mpid/ch3/src/ch3u_rma_sync.c at line 5061: win_ptr->my_counter >= 0
> >> internal ABORT - process 0"
> >>
> >> Another question: if I use remote memory access, every process which does
> >> not create a window for sharing must still add the line
> >> MPI_Win_create(MPI_BOTTOM, 0, data_type, MPI_INFO_NULL, MPI_COMM_WORLD,
> >> &win); correct?
> >>
> >> Thanks a lot!
> >>
> >> --
> >> Best Regards,
> >> Sufeng Niu
> >> ECASP lab, ECE department, Illinois Institute of Technology
> >> Tel: 312-731-7219
> >>
> >> ------------------------------
> >>
> >> Message: 2
> >> Date: Wed, 3 Jul 2013 13:18:26 -0500
> >> From: Yi Gu <gyi at mtu.edu>
> >> To: discuss at mpich.org
> >> Subject: Re: [mpich-discuss] beginner for remote mem access
> >> Message-ID:
> >>
> >> <CAE5iO_SNJKyC9sG3XbTu4vbS=boDrOEV-Y69jUXZR6MQDanUZA at mail.gmail.com>
> >> Content-Type: text/plain; charset="iso-8859-1"
> >>
> >> Hi, sufeng:
> >>
> >> I think you should initialize the array first: since you print out b
> >> without initialization, it could print out anything.
> >>
> >> Yi
> >>
> >>
> >> ------------------------------
> >>
> >> Message: 3
> >> Date: Wed, 3 Jul 2013 13:26:29 -0500
> >> From: Yi Gu <gyi at mtu.edu>
> >> To: discuss at mpich.org
> >> Subject: Re: [mpich-discuss] beginner for remote mem access
> >> Message-ID:
> >>
> >> <CAE5iO_Rj7uTaVsEmJ1LBHe+OpTCJjLqpMmvBmjZRd=6WLWC=EQ at mail.gmail.com>
> >> Content-Type: text/plain; charset="iso-8859-1"
> >>
> >> Hi, sufeng:
> >>
> >> I am not sure what the second problem is, but I think
> >> MPI_Win_create(MPI_BOTTOM, 0, sizeof(float), MPI_INFO_NULL, MPI_COMM_WORLD,
> >> &win);
> >> should be
> >> MPI_Win_create(MPI_BOTTOM, 0, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &win);
> >>
> >> For the third question: if they need to access the window, yes.
> >>
> >> Yi
> >>
> >>
> >> ------------------------------
> >>
> >> Message: 4
> >> Date: Wed, 3 Jul 2013 13:28:31 -0500
> >> From: Jeff Hammond <jeff.science at gmail.com>
> >> To: discuss at mpich.org
> >> Subject: Re: [mpich-discuss] beginner for remote mem access
> >> Message-ID:
> >>
> >> <CAGKz=uLwki+cCY2oL_2vFwkwCi=A1Qd9VV1qXHno20gHxqY6iw at mail.gmail.com>
> >> Content-Type: text/plain; charset=ISO-8859-1
> >>
> >> Look at the MPICH test suite for numerous examples of correct RMA
> >> programs.
> >>
> >> Jeff
> >>
> >>
> >>
> >>
> >> --
> >> Jeff Hammond
> >> jeff.science at gmail.com
> >>
> >>
> >> ------------------------------
> >>
> >
> >
> >
> >
> > --
> > Best Regards,
> > Sufeng Niu
> > ECASP lab, ECE department, Illinois Institute of Technology
> > Tel: 312-731-7219
> >
>
>
>
> --
> Jeff Hammond
> jeff.science at gmail.com
>
>
> ------------------------------
>
> Message: 2
> Date: Fri, 5 Jul 2013 12:15:22 +0100
> From: anton.s.murfin at sellafieldsites.com
> To: mpich-discuss at mcs.anl.gov
> Subject: [mpich-discuss] Running configure on Suse Enterprise 10
> Message-ID:
>         <
> OF382230A3.E1923C9D-ON80257B9F.0038EB2D-80257B9F.003DD531 at sellafieldsites.com
> >
>
> Content-Type: text/plain; charset="us-ascii"
>
> An HTML attachment was scrubbed...
> URL: <http://lists.mpich.org/pipermail/discuss/attachments/20130705/d6dc7b40/attachment.html>
> -------------- next part --------------
> An embedded and charset-unspecified text was scrubbed...
> Name: c.txt
> URL: <http://lists.mpich.org/pipermail/discuss/attachments/20130705/d6dc7b40/attachment.txt>
>
> ------------------------------
>
>



-- 
Best Regards,
Sufeng Niu
ECASP lab, ECE department, Illinois Institute of Technology
Tel: 312-731-7219


More information about the discuss mailing list