[mpich-discuss] MPI I/O vs Sequential I/O

Muhammad Zulfikar Handana zulfikarhandana at gmail.com
Wed Mar 20 12:13:17 CDT 2013


Thanks for your explanation.

I am writing to files of 4.5 MB and 115 MB, which I think is a large
amount of data. The files are JPG and MP4.

This program actually implements the Advanced Encryption Standard (AES);
I am trying to use MPI I/O to reduce its execution time.

What do you mean by "only recovers at much higher levels of
parallelism"? Can you explain the details, with an example?

Thanks a lot for your attention.

> 1) When I execute my program using MPI-I/O, why is the execution time
> better with 1 process than with 2 processes? Am I doing something
> wrong? Here is my source code.

In general, with one process doing I/O, the file system can cache the
heck out of requests.  When a second process comes along, that caching
needs to be disabled or invalidated, causing a "performance crash"
that only recovers at much higher levels of parallelism.
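
To make that concrete, here is a minimal, self-contained timing sketch
(not from this thread; the file name "testfile" and the 1 MiB chunk
size are arbitrary choices of mine).  Each rank writes one disjoint
chunk and rank 0 reports the slowest writer's time; running it with
mpiexec -n 1, 2, 4, ... lets you watch how the per-write cost changes
as the process count grows:

  #include <mpi.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  #define CHUNK (1024*1024)   /* 1 MiB per rank */

  int main(int argc, char **argv)
  {
      int rank, size;
      double t, tmax;
      char *buf;
      MPI_File fh;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      buf = malloc(CHUNK);
      memset(buf, 'x', CHUNK);

      MPI_File_open(MPI_COMM_WORLD, "testfile",
                    MPI_MODE_CREATE | MPI_MODE_WRONLY,
                    MPI_INFO_NULL, &fh);

      MPI_Barrier(MPI_COMM_WORLD);      /* start all ranks together */
      t = MPI_Wtime();
      /* each rank writes its own disjoint 1 MiB region */
      MPI_File_write_at(fh, (MPI_Offset)rank * CHUNK, buf, CHUNK,
                        MPI_CHAR, MPI_STATUS_IGNORE);
      t = MPI_Wtime() - t;

      /* the run is only as fast as the slowest writer */
      MPI_Reduce(&t, &tmax, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
      if (rank == 0)
          printf("%d procs: %.6f s\n", size, tmax);

      MPI_File_close(&fh);
      free(buf);
      MPI_Finalize();
      return 0;
  }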

Let's look specifically at your code, though:

  /* every rank opens the same output file */
  MPI_File_open (MPI_COMM_WORLD, file_akhir, MPI_MODE_CREATE |
       MPI_MODE_WRONLY, MPI_INFO_NULL, &hasil);
  int jumlah1 = (((block-1)*16)+pad);   /* total bytes to write    */
  int jumlah  = jumlah1 / size;         /* intended share per rank */

  /* each rank's view starts jumlah bytes past the previous rank's... */
  MPI_File_set_view(hasil, rank * jumlah * sizeof(char),
      MPI_CHAR, MPI_CHAR, "native", MPI_INFO_NULL);
  /* ...but the count here is jumlah1 (the whole buffer), so the
     ranks' writes overlap */
  MPI_File_write (hasil, output, jumlah1, MPI_CHAR, MPI_STATUS_IGNORE);
  MPI_File_close(&hasil);

- Are you writing a large amount of data or a small amount of data here?
- What file system are you using?
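
For what it's worth, here is a hedged sketch of what the fragment may
have intended (it reuses output, block, pad, file_akhir, rank, and size
from your surrounding program; the last-rank remainder handling is my
assumption): each rank writes only its own slice of the buffer, and the
collective MPI_File_write_at_all gives the MPI-IO layer a chance to
coordinate the ranks:

  MPI_File hasil;
  int total = ((block - 1) * 16) + pad;  /* jumlah1: total bytes     */
  int share = total / size;              /* jumlah: bytes per rank   */
  /* last rank also takes the remainder left by integer division */
  int mine  = (rank == size - 1) ? total - share * rank : share;
  MPI_Offset off = (MPI_Offset)rank * share;

  MPI_File_open(MPI_COMM_WORLD, file_akhir,
                MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &hasil);
  /* collective write: every rank writes only its "mine" bytes,
     starting "off" bytes into both the file and the buffer */
  MPI_File_write_at_all(hasil, off, output + off, mine,
                        MPI_CHAR, MPI_STATUS_IGNORE);
  MPI_File_close(&hasil);

Whether the collective call actually helps depends on the file system,
which is why the questions above matter.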


--
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA

