[mpich-discuss] collective read slower than non-collective read

ww4192336 at 126.com
Thu Apr 21 02:46:10 CDT 2016


Hi rob,
I look at the src/mpi/romio/test/coll_perf.c test, and I experimented with the collective and non-collective versions, in single machine collective read is slower than non-collective read, ,my example is behind.
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

#define FILE_LENGTH 100000000
#define UNSIGNEDCHAR_PER_BLK 10

int main(int argc, char **argv)
{
    int myid, numprocs;
    double startTime, readTime, newReadTime;
    unsigned char *buf;
    int bufLen;
    MPI_Datatype fileType;
    MPI_File fh;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    bufLen = FILE_LENGTH / numprocs;
    buf = (unsigned char *) malloc(bufLen);

    MPI_File_open(MPI_COMM_WORLD, "g://test.txt", MPI_MODE_RDWR,
                  MPI_INFO_NULL, &fh);

    /* each process reads every numprocs-th block of
     * UNSIGNEDCHAR_PER_BLK bytes (interleaved file view) */
    MPI_Type_vector(bufLen / UNSIGNEDCHAR_PER_BLK, UNSIGNEDCHAR_PER_BLK,
                    numprocs * UNSIGNEDCHAR_PER_BLK, MPI_UNSIGNED_CHAR,
                    &fileType);
    MPI_Type_commit(&fileType);
    MPI_File_set_view(fh, (MPI_Offset) myid * UNSIGNEDCHAR_PER_BLK,
                      MPI_UNSIGNED_CHAR, fileType, "native", MPI_INFO_NULL);

    startTime = MPI_Wtime();
    /* non-collective read */
    /* MPI_File_read(fh, buf, bufLen, MPI_UNSIGNED_CHAR, &status); */
    /* collective read */
    MPI_File_read_all(fh, buf, bufLen, MPI_UNSIGNED_CHAR, &status);
    readTime = MPI_Wtime() - startTime;

    /* report the slowest process's time */
    MPI_Allreduce(&readTime, &newReadTime, 1, MPI_DOUBLE, MPI_MAX,
                  MPI_COMM_WORLD);
    if (myid == 0) {
        printf("%d processes read time = %f sec, "
               "read bandwidth = %f Mbytes/sec\n",
               numprocs, newReadTime,
               FILE_LENGTH / (1024 * 1024 * newReadTime));
    }

    MPI_Type_free(&fileType);
    free(buf);
    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}


the result is
non-collective read
4 processes read time = 1.380783 sec, read bandwidth = 69.067633 Mbytes/sec


collective read
4 processes read time = 2.256306 sec, read bandwidth = 42.267062 Mbytes/sec


I am interested in collective read/write; maybe I should experiment with collective I/O on a cluster. Thank you for your answer.
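Following your suggestion to pass in hints, I could also try controlling ROMIO's collective buffering through an MPI_Info object at open time. A minimal sketch, assuming the standard ROMIO hint names (romio_cb_read, cb_buffer_size, cb_nodes) and a hypothetical file path:

```c
#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    MPI_Info info;
    MPI_File fh;

    MPI_Init(&argc, &argv);

    /* Build an info object carrying ROMIO collective-buffering hints
     * and pass it to MPI_File_open instead of MPI_INFO_NULL. */
    MPI_Info_create(&info);
    MPI_Info_set(info, "romio_cb_read", "enable");    /* force collective buffering on reads */
    MPI_Info_set(info, "cb_buffer_size", "16777216"); /* 16 MB aggregator staging buffer */
    MPI_Info_set(info, "cb_nodes", "1");              /* number of I/O aggregator processes */

    /* "test.txt" is a hypothetical path for illustration */
    MPI_File_open(MPI_COMM_WORLD, "test.txt", MPI_MODE_RDONLY, info, &fh);
    MPI_Info_free(&info);

    /* ... set the file view and call MPI_File_read_all as before ... */

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
```

Setting "romio_cb_read" to "disable" instead should make MPI_File_read_all behave much like the independent path, which would let me isolate the cost of the two-phase exchange on a single machine.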


Best Regards,
Ice.


At 2016-04-20 01:49:08, "Rob Latham" <robl at mcs.anl.gov> wrote:
>
>
>On 04/19/2016 07:45 AM, 冰 wrote:
>> Hi,
>> I run programs in one computer, In my tests here, the MPI_File_read_all
>> have been slower than MPI_File_read, could you send me a simple routine
>> that collective read is faster than non-collective read?
>
>Need more information.
>
>The collective calls provide two important optimizations:  reduce the 
>number of file system clients and increase the typical request size.
>
>On a single processor, though, it's possible the collective overhead 
>outweighs the benefits.
>
>How much slower are you seeing?  how many MPI processes?  what kind of 
>file view and memory type?
>
>You might want to look at the src/mpi/romio/test/coll_perf.c test and 
>experiment with the collective and non-collective versions (you'll have 
>to modify the test or pass in hints).
>
>==rob
>_______________________________________________
>discuss mailing list     discuss at mpich.org
>To manage subscription options or unsubscribe:
>https://lists.mpich.org/mailman/listinfo/discuss