[mpich-discuss] (no subject)
Chafik sanaa
san.chafik at gmail.com
Fri Dec 5 14:29:08 CST 2014
Hi, thanks.
2014-12-05 19:37 GMT+01:00 "Antonio J. Peña" <apenya at mcs.anl.gov>:
>
> Dear user,
>
> Please note that this mailing list is exclusively intended to discuss
> potential issues with the MPICH MPI implementation. Since your question is
> about general MPI usage, we cannot provide that assistance here. You will
> surely find help with this kind of topic in general programming forums
> such as Stack Overflow, where some of our team members contribute as well.
> Web search engines are also likely to be a good resource in your case.
>
> Best regards,
> Antonio
>
>
>
> On 12/05/2014 11:50 AM, Chafik sanaa wrote:
>
> I have an array "tab[]" with 10 elements "| 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
> 8 | 9 |". Using MPI_Scatterv I send 4 elements to process 0 ("| 0 | 1 | 2 |
> 3 |") and 6 elements to process 1 ("| 4 | 5 | 6 | 7 | 8 | 9 |"). Each
> process then adds a value "add" to every element it received, which gives a
> new array "afteradd[]" on each process. Now I want to collect both
> "afteradd[]" arrays on process 0. After a little research I found that
> there is an MPI_Gatherv function that does this, but I do not know how to
> use it.
>
> program:
> #include <stdlib.h>
> #include <stdio.h>
> #include <time.h>
> #include <math.h>
> #include "mpi.h"
>
> int main(int argc, char** argv)
> {
>     int taskid, ntasks;
>     int ierr, i, itask;
>     int sendcounts[2048], displs[2048], recvcount;
>     double **sendbuff, *recvbuff, *afteradd;
>     double inittime, totaltime;
>     const int nbr_etat = 10;
>     double tab[nbr_etat];
>     for (i = 0; i < nbr_etat; i++)
>         tab[i] = i;
>     int nbr_elm[2] = { 4, 6 };   /* elements per rank; run with 2 processes */
>     int dpl[2] = { 0, 4 };       /* offset of each rank's block in tab[] */
>     MPI_Init(&argc, &argv);
>     MPI_Comm_rank(MPI_COMM_WORLD, &taskid);
>     MPI_Comm_size(MPI_COMM_WORLD, &ntasks);
>     recvbuff = (double *)malloc(sizeof(double) * nbr_etat);
>     if (taskid == 0)
>     {
>         /* one contiguous send buffer; one row of nbr_etat doubles per rank */
>         sendbuff = (double **)malloc(sizeof(double *) * ntasks);
>         sendbuff[0] = (double *)malloc(sizeof(double) * ntasks * nbr_etat);
>         for (i = 1; i < ntasks; i++)
>             sendbuff[i] = sendbuff[i - 1] + nbr_etat;
>     }
>     else
>     {
>         /* the send buffer is ignored on non-root ranks; keep a valid pointer */
>         sendbuff = (double **)malloc(sizeof(double *));
>         sendbuff[0] = (double *)malloc(sizeof(double));
>     }
>
>     if (taskid == 0)
>     {
>         for (itask = 0; itask < ntasks; itask++)
>         {
>             displs[itask] = itask * nbr_etat;  /* row start inside sendbuff[0] */
>             sendcounts[itask] = nbr_elm[itask];
>             for (i = 0; i < sendcounts[itask]; i++)
>             {
>                 sendbuff[itask][i] = tab[i + dpl[itask]];
>                 printf("+ %0.0f ", sendbuff[itask][i]);
>             }
>             printf("\n");
>         }
>     }
>
>     recvcount = nbr_elm[taskid];
>
>     inittime = MPI_Wtime();
>     ierr = MPI_Scatterv(sendbuff[0], sendcounts, displs, MPI_DOUBLE,
>                         recvbuff, recvcount, MPI_DOUBLE,
>                         0, MPI_COMM_WORLD);
>     totaltime = MPI_Wtime() - inittime;
>
>     int add = 3;
>     afteradd = (double *)malloc(sizeof(double) * nbr_etat);
>     for (i = 0; i < recvcount; i++)
>         afteradd[i] = add + recvbuff[i];
>     for (i = 0; i < recvcount; i++)
>         printf("* %0.0f ", afteradd[i]);
>     printf("\n");
>
>     /* free on every rank; recvbuff and afteradd were previously leaked */
>     free(sendbuff[0]);
>     free(sendbuff);
>     free(recvbuff);
>     free(afteradd);
>     MPI_Finalize();
>     return 0;
> }
>
>
>
> _______________________________________________
> discuss mailing list discuss at mpich.org
> To manage subscription options or unsubscribe:https://lists.mpich.org/mailman/listinfo/discuss
>
>
>
> --
> Antonio J. Peña
> Postdoctoral Appointee
> Mathematics and Computer Science Division
> Argonne National Laboratory
> 9700 South Cass Avenue, Bldg. 240, Of. 3148
> Argonne, IL 60439-4847
> apenya at mcs.anl.gov
> www.mcs.anl.gov/~apenya
>
>
>