[mpich-discuss] discuss Digest, Vol 9, Issue 5
Sufeng Niu
sniu at hawk.iit.edu
Wed Jul 3 14:46:11 CDT 2013
Thanks a lot for your replies. I should initialize the b array, and I figured
out that MPI_Win_fence must be called by all processes, which solves the
assertion-failure issue.
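For reference, here is a minimal sketch of the corrected pattern (my
assumptions: exactly 4 ranks, every rank exposes its own 64-float array so
the displacement of 8 stays in range, and only rank 1 reads; the file name
minimal_fence.c is just illustrative):

    /* minimal_fence.c -- sketch: MPI_Win_fence is collective over the
     * window's communicator, so every rank calls both fences.
     * Run with: mpiexec -n 4 ./minimal_fence */
    #include "mpi.h"
    #include <stdio.h>
    #define SIZE 8

    int main(int argc, char *argv[])
    {
        int rank, numtasks, i;
        float a[64], b[SIZE];
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &numtasks);

        for (i = 0; i < 64; i++) a[i] = (float)(i % 16 + 1); /* 1..16 repeated */
        for (i = 0; i < SIZE; i++) b[i] = 0.0f;              /* initialize b   */

        /* every rank exposes all 64 floats of its local array */
        MPI_Win_create(a, 64 * sizeof(float), sizeof(float),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);                 /* all ranks open the epoch  */
        if (rank == 1)                         /* only rank 1 communicates  */
            MPI_Get(b, SIZE, MPI_FLOAT, 0, 8, SIZE, MPI_FLOAT, win);
        MPI_Win_fence(0, win);                 /* all ranks close the epoch */

        printf("rank= %d b[0]= %3.1f b[%d]= %3.1f\n",
               rank, b[0], SIZE - 1, b[SIZE - 1]);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }

With this, rank 1 should print b[0]= 9.0 and b[7]= 16.0, the other ranks keep
their zero-initialized b, and the assertion no longer fires.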
It seems that MPI_Win_fence is very similar to MPI_Barrier. Can I say
MPI_Win_fence is the RMA version of MPI_Barrier?

Sorry about so many questions. Can I use an MPI window at the thread level?
Let's say I have 4 processes, each with 8 threads. If only one thread creates
the memory-access window, can the other processes copy from it while the 8
internal threads use shared memory to perform some operations? Is that OK?
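To make that question concrete, here is roughly the pattern I have in mind (a
sketch only; MPI_THREAD_MULTIPLE and the pthread details are my assumptions
about the setup, not something I have verified):

    #include "mpi.h"
    #include <pthread.h>
    #define NTHREADS 8

    static float local_buf[64];          /* window memory, shared by threads */

    static void *worker(void *arg)
    {
        /* the 8 threads of one process touch local_buf through ordinary
         * shared memory; no MPI calls are made here */
        long id = (long)arg;
        local_buf[id] += 1.0f;
        return NULL;
    }

    int main(int argc, char *argv[])
    {
        int provided, rank;
        long t;
        pthread_t threads[NTHREADS];
        MPI_Win win;

        /* request full thread support since threads run alongside MPI */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* only the main thread creates the window (collective over ranks) */
        MPI_Win_create(local_buf, sizeof(local_buf), sizeof(float),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);
        /* ...other ranks would MPI_Get from this window here... */
        MPI_Win_fence(0, win);

        /* after the RMA epoch, local threads work on the same buffer */
        for (t = 0; t < NTHREADS; t++)
            pthread_create(&threads[t], NULL, worker, (void *)t);
        for (t = 0; t < NTHREADS; t++)
            pthread_join(threads[t], NULL);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }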
Thanks a lot!
On Wed, Jul 3, 2013 at 1:28 PM, <discuss-request at mpich.org> wrote:
> Send discuss mailing list submissions to
> discuss at mpich.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://lists.mpich.org/mailman/listinfo/discuss
> or, via email, send a message with subject or body 'help' to
> discuss-request at mpich.org
>
> You can reach the person managing the list at
> discuss-owner at mpich.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of discuss digest..."
>
>
> Today's Topics:
>
> 1. beginner for remote mem access (Sufeng Niu)
> 2. Re: beginner for remote mem access (Yi Gu)
> 3. Re: beginner for remote mem access (Yi Gu)
> 4. Re: beginner for remote mem access (Jeff Hammond)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 3 Jul 2013 13:12:46 -0500
> From: Sufeng Niu <sniu at hawk.iit.edu>
> To: discuss at mpich.org
> Subject: [mpich-discuss] beginner for remote mem access
> Message-ID:
> <CAFNNHkwxqXUB_+b8_tZ=
> L5K7VcDVhsz4ZMDDMGF55v-FAhgeqg at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi,
>
> I am a beginner just trying out remote memory access, and I wrote a simple
> program to test it:
>
> #include "mpi.h"
> #include <stdio.h>
> #define SIZE 8
>
> int main(int argc, char *argv[])
> {
>     int numtasks, rank, source=0, dest, tag=1, i;
>     float a[64] =
>     {
>         1.0,  2.0,  3.0,  4.0,  5.0,  6.0,  7.0,  8.0,
>         9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0,
>         1.0,  2.0,  3.0,  4.0,  5.0,  6.0,  7.0,  8.0,
>         9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0,
>         1.0,  2.0,  3.0,  4.0,  5.0,  6.0,  7.0,  8.0,
>         9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0,
>         1.0,  2.0,  3.0,  4.0,  5.0,  6.0,  7.0,  8.0,
>         9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0,
>     };
>     float b[SIZE];
>
>     MPI_Status stat;
>
>     MPI_Init(&argc, &argv);
>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>     MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
>
>     MPI_Win win;
>
>     // check processor rank
>     char processor_name[MPI_MAX_PROCESSOR_NAME];
>     int name_len;
>     MPI_Get_processor_name(processor_name, &name_len);
>     printf("-- processor %s, rank %d out of %d processors\n",
>            processor_name, rank, numtasks);
>
>     MPI_Barrier(MPI_COMM_WORLD);
>
>     if (numtasks == 4) {
>         if (rank == 0) {
>             printf("create window \n");
>             MPI_Win_create(a, 8*sizeof(float), sizeof(float),
>                            MPI_INFO_NULL, MPI_COMM_WORLD, &win);
>         }
>         else {
>             MPI_Win_create(MPI_BOTTOM, 0, sizeof(float),
>                            MPI_INFO_NULL, MPI_COMM_WORLD, &win);
>         }
>
>         MPI_Win_fence(0, win);
>
>         if (rank == 1) {
>             MPI_Get(b, SIZE, MPI_FLOAT, 0, 8, SIZE, MPI_FLOAT, win);
>             MPI_Win_fence(0, win);
>         }
>
>         printf("rank= %d b= %3.1f %3.1f %3.1f %3.1f %3.1f %3.1f %3.1f %3.1f\n",
>                rank, b[0], b[1], b[2], b[3], b[4], b[5], b[6], b[7]);
>     }
>     else
>         printf("Must specify %d processors. Terminating.\n", SIZE);
>
>     MPI_Win_free(&win);
>     MPI_Finalize();
> }
>
> However, the terminal gives some odd results:
> rank= 0 b= 0.0 0.0 0.0 0.0 0.0 0.0 -71847793475452928.0 0.0
> rank= 2 b= 0.0 0.0 0.0 0.0 0.0 0.0 222086852849451401216.0 0.0
> rank= 3 b= 0.0 0.0 0.0 0.0 0.0 0.0 -74882.4 0.0
> rank= 1 b= 9.0 10.0 11.0 12.0 13.0 14.0 15.0 16.0
> Rank 1's result is correct, but the others should be all zeros. The terminal
> also prints: "Assertion failed in file src/mpid/ch3/src/ch3u_rma_sync.c at
> line 5061: win_ptr->my_counter >= 0 internal ABORT - process 0"
>
> Another question: if I use remote memory access, every process that does not
> create a window to share must still add the line
> MPI_Win_create(MPI_BOTTOM, 0, data_type, MPI_INFO_NULL, MPI_COMM_WORLD,
> &win); Is that correct?
>
> Thanks a lot!
>
> --
> Best Regards,
> Sufeng Niu
> ECASP lab, ECE department, Illinois Institute of Technology
> Tel: 312-731-7219
>
> ------------------------------
>
> Message: 2
> Date: Wed, 3 Jul 2013 13:18:26 -0500
> From: Yi Gu <gyi at mtu.edu>
> To: discuss at mpich.org
> Subject: Re: [mpich-discuss] beginner for remote mem access
> Message-ID:
> <CAE5iO_SNJKyC9sG3XbTu4vbS=
> boDrOEV-Y69jUXZR6MQDanUZA at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi Sufeng,
>
> I think you should initialize the b array first; since you print out b
> without initialization, it can contain anything.
>
> Yi
>
>
> On Wed, Jul 3, 2013 at 1:12 PM, Sufeng Niu <sniu at hawk.iit.edu> wrote:
>
> > [quoted text trimmed; the full program and output appear in Message 1 above]
>
> ------------------------------
>
> Message: 3
> Date: Wed, 3 Jul 2013 13:26:29 -0500
> From: Yi Gu <gyi at mtu.edu>
> To: discuss at mpich.org
> Subject: Re: [mpich-discuss] beginner for remote mem access
> Message-ID:
> <CAE5iO_Rj7uTaVsEmJ1LBHe+OpTCJjLqpMmvBmjZRd=6WLWC=
> EQ at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi Sufeng,
>
> I am not sure what the second problem is, but I think
> MPI_Win_create(MPI_BOTTOM, 0, sizeof(float), MPI_INFO_NULL, MPI_COMM_WORLD,
> &win);
> should be
> MPI_Win_create(MPI_BOTTOM, 0, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &win);
>
> For the third question: if they need to access the window, yes.
>
> Yi
>
>
> On Wed, Jul 3, 2013 at 1:18 PM, Yi Gu <gyi at mtu.edu> wrote:
>
> > [quoted text trimmed; same thread content as Messages 1 and 2 above]
>
> ------------------------------
>
> Message: 4
> Date: Wed, 3 Jul 2013 13:28:31 -0500
> From: Jeff Hammond <jeff.science at gmail.com>
> To: discuss at mpich.org
> Subject: Re: [mpich-discuss] beginner for remote mem access
> Message-ID:
> <CAGKz=uLwki+cCY2oL_2vFwkwCi=
> A1Qd9VV1qXHno20gHxqY6iw at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Look at the MPICH test suite for numerous examples of correct RMA programs.
>
> Jeff
>
> On Wed, Jul 3, 2013 at 1:12 PM, Sufeng Niu <sniu at hawk.iit.edu> wrote:
>
> > [quoted text trimmed; the full program and output appear in Message 1 above]
>
>
>
> --
> Jeff Hammond
> jeff.science at gmail.com
>
>
> ------------------------------
>
> _______________________________________________
> discuss mailing list
> discuss at mpich.org
> https://lists.mpich.org/mailman/listinfo/discuss
>
> End of discuss Digest, Vol 9, Issue 5
> *************************************
>
--
Best Regards,
Sufeng Niu
ECASP lab, ECE department, Illinois Institute of Technology
Tel: 312-731-7219