<div dir="ltr">Hello,<div><br></div><div>I am trying to integrate an asynchronous API in ROMIO but I am experiencing some problems, probably caused by the improper use of the MPIX_Grequest interface.</div><div><br></div><div>Short description of the integrated functions:</div><div>I have a ROMIO implementation with extended MPI-IO hints that can be used to divert written data, during collective I/O, to a local file system (typically running on a SSD). The full description of the modification can be found at this link: <a href="https://github.com/gcongiu/E10/tree/beegfs-devel">https://github.com/gcongiu/E10/tree/beegfs-devel</a></div><div><br></div><div>The BeeGFS ROMIO driver uses the cache API internally provided by BeeGFS to move data around, while the custom ROMIO implementation provides support for any other file system (in the "common" layer).</div><div><br></div><div>Description of the problem:</div><div>I am currently having problems making the BeeGFS driver work properly. More specifically I am noticing that when the size of the shared file is large (32GB in my configuration) and I am writing multiple files one after the other (using a custom coll_perf benchmark) the i-th file (for i > 0 with 0 <= i < 4) gets stuck.</div><div><br></div><div>BeeGFS provides a deeper_cache_flush_range() function to flush data in the cache to the global file. This is non blocking and will submit a range (offset, length) for a certain filename to a cache deamon that transfers the data. Multiple ranges can be submitted one after the other. </div><div><br></div><div>Completion can be checked using deeper_cache_flush_wait() using the filename as input. Since the MPI_Grequest model requires the external thread to make progress by invoking MPI_Grequest_complete() I have opted for the MPIX_Grequest interface which allows me to make progress (MPI_Grequest_complete) inside MPI while invoking the MPI_Wait() function.</div><div><br></div><div>deeper_cache_flush_range() does not require the application to keep any handler for the submitted request. 
Description of the problem:
I am currently having problems making the BeeGFS driver work properly. More specifically, I am noticing that when the shared file is large (32GB in my configuration) and I write multiple files one after the other (using a custom coll_perf benchmark), the i-th file (for some i > 0, with 0 <= i < 4) gets stuck.

BeeGFS provides a deeper_cache_flush_range() function to flush data from the cache to the global file. This is non-blocking and submits a range (offset, length) for a certain filename to a cache daemon that transfers the data. Multiple ranges can be submitted one after the other.

Completion can then be checked with deeper_cache_flush_wait(), using the filename as input. Since the MPI_Grequest model requires an external thread to make progress by invoking MPI_Grequest_complete(), I have opted for the MPIX_Grequest interface, which allows progress (MPI_Grequest_complete) to be made inside MPI while MPI_Wait() is being invoked.

deeper_cache_flush_range() does not require the application to keep any handle for the submitted request. The only thing the application needs is the name of the file to pass to deeper_cache_flush_wait().
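For comparison, this is roughly what I would have to do with the plain MPI-2 generalized request interface, i.e. dedicate a helper thread per request just to block in deeper_cache_flush_wait() and call MPI_Grequest_complete(). It is only a sketch; query_fn/free_fn/cancel_fn are placeholders and <pthread.h> would be needed:

/* Rough sketch of the plain MPI-2 alternative I am trying to avoid:
 * a helper thread blocks in deeper_cache_flush_wait() and then completes
 * the generalized request. query_fn/free_fn/cancel_fn are placeholders. */
struct greq_state {
    MPI_Request req;
    const char *filename;
    int flags;
};

static void *flush_wait_thread(void *arg) {
    struct greq_state *s = (struct greq_state *)arg;
    deeper_cache_flush_wait(s->filename, s->flags); /* blocks until the cache daemon is done */
    MPI_Grequest_complete(s->req);                  /* progress has to come from this thread */
    return NULL;
}

/* ... after deeper_cache_flush_range() has been submitted ... */
struct greq_state *s = (struct greq_state *)ADIOI_Malloc(sizeof(*s));
pthread_t tid;

s->filename = fd->filename;
s->flags = fflags;
MPI_Grequest_start(query_fn, free_fn, cancel_fn, s, &s->req);
pthread_create(&tid, NULL, flush_wait_thread, s);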
Below is a code snippet for the corresponding non-blocking implementation in ROMIO:

/* arguments for callback function */
struct callback {
    ADIO_File fd_;
    ADIOI_Sync_req_t req_;
};

/*
 * ADIOI_BEEGFS_Sync_thread_start - start synchronisation of req
 */
int ADIOI_BEEGFS_Sync_thread_start(ADIOI_Sync_thread_t t) {
    ADIOI_Atomic_queue_t q = t->sub_;
    ADIOI_Sync_req_t r;
    int retval, count, fflags, error_code, i;
    ADIO_Offset offset, len;
    MPI_Count datatype_size;
    MPI_Datatype datatype;
    ADIO_Request *req;
    char myname[] = "ADIOI_BEEGFS_SYNC_THREAD_START";

    r = ADIOI_Atomic_queue_front(q);
    ADIOI_Atomic_queue_pop(q);

    ADIOI_Sync_req_get_key(r, ADIOI_SYNC_ALL, &offset,
                           &datatype, &count, &req, &error_code, &fflags);

    MPI_Type_size_x(datatype, &datatype_size);
    len = (ADIO_Offset)datatype_size * (ADIO_Offset)count;

    retval = deeper_cache_flush_range(t->fd_->filename, (off_t)offset, (size_t)len, fflags);

    if (retval == DEEPER_RETVAL_SUCCESS && ADIOI_BEEGFS_greq_class == 1) {
        MPIX_Grequest_class_create(ADIOI_BEEGFS_Sync_req_query,
                                   ADIOI_BEEGFS_Sync_req_free,
                                   MPIU_Greq_cancel_fn,
                                   ADIOI_BEEGFS_Sync_req_poll,
                                   ADIOI_BEEGFS_Sync_req_wait,
                                   &ADIOI_BEEGFS_greq_class);
    } else {
        /* --BEGIN ERROR HANDLING-- */
        return MPIO_Err_create_code(MPI_SUCCESS,
                                    MPIR_ERR_RECOVERABLE,
                                    "ADIOI_BEEGFS_Cache_sync_req",
                                    __LINE__, MPI_ERR_IO, "**io %s",
                                    strerror(errno));
        /* --END ERROR HANDLING-- */
    }

    /* init args for the callback functions */
    struct callback *args = (struct callback *)ADIOI_Malloc(sizeof(struct callback));
    args->fd_ = t->fd_;
    args->req_ = r;

    MPIX_Grequest_class_allocate(ADIOI_BEEGFS_greq_class, args, req);

    return MPI_SUCCESS;
}

/*
 * ADIOI_BEEGFS_Sync_req_poll -
 */
int ADIOI_BEEGFS_Sync_req_poll(void *extra_state, MPI_Status *status) {
    struct callback *cb = (struct callback *)extra_state;
    ADIOI_Sync_req_t r = (ADIOI_Sync_req_t)cb->req_;
    ADIO_File fd = (ADIO_File)cb->fd_;
    char *filename = fd->filename;
    int count, cache_flush_flags, error_code;
    MPI_Datatype datatype;
    ADIO_Offset offset;
    MPI_Aint lb, extent;
    ADIO_Offset len;
    ADIO_Request *req;

    ADIOI_Sync_req_get_key(r, ADIOI_SYNC_ALL, &offset,
                           &datatype, &count, &req, &error_code, &cache_flush_flags);

    int retval = deeper_cache_flush_wait(filename, cache_flush_flags);

    MPI_Type_get_extent(datatype, &lb, &extent);
    len = (ADIO_Offset)extent * (ADIO_Offset)count;

    if (fd->hints->e10_cache_coherent == ADIOI_HINT_ENABLE)
        ADIOI_UNLOCK(fd, offset, SEEK_SET, len);

    /* mark generalized request as completed */
    MPI_Grequest_complete(*req);

    if (retval != DEEPER_RETVAL_SUCCESS)
        goto fn_exit_error;

    MPI_Status_set_cancelled(status, 0);
    MPI_Status_set_elements(status, datatype, count);
    status->MPI_SOURCE = MPI_UNDEFINED;
    status->MPI_TAG = MPI_UNDEFINED;

    ADIOI_Free(cb);

    return MPI_SUCCESS;

fn_exit_error:
    ADIOI_Free(cb);

    return MPIO_Err_create_code(MPI_SUCCESS,
                                MPIR_ERR_RECOVERABLE,
                                "ADIOI_BEEGFS_Sync_req_poll",
                                __LINE__, MPI_ERR_IO, "**io %s",
                                strerror(errno));
}

/*
 * ADIOI_BEEGFS_Sync_req_wait -
 */
int ADIOI_BEEGFS_Sync_req_wait(int count, void **array_of_states, double timeout, MPI_Status *status) {
    return ADIOI_BEEGFS_Sync_req_poll(*array_of_states, status);
}

/*
 * ADIOI_BEEGFS_Sync_req_query -
 */
int ADIOI_BEEGFS_Sync_req_query(void *extra_state, MPI_Status *status) {
    return MPI_SUCCESS;
}

/*
 * ADIOI_BEEGFS_Sync_req_free -
 */
int ADIOI_BEEGFS_Sync_req_free(void *extra_state) {
    return MPI_SUCCESS;
}

/*
 * ADIOI_BEEGFS_Sync_req_cancel -
 */
int ADIOI_BEEGFS_Sync_req_cancel(void *extra_state, int complete) {
    return MPI_SUCCESS;
}
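For clarity, this is how I expect the request to be driven (a simplified view, using the same t and req as in the listing above):

/* Simplified view of the completion path I am expecting: MPI_Wait() on the
 * request allocated by MPIX_Grequest_class_allocate() should drive
 * ADIOI_BEEGFS_Sync_req_wait()/ADIOI_BEEGFS_Sync_req_poll() inside MPI,
 * which call deeper_cache_flush_wait() and MPI_Grequest_complete(). */
MPI_Status status;

ADIOI_BEEGFS_Sync_thread_start(t);  /* submits the range and allocates the request into *req */
/* ... later, when the flush has to be completed ... */
MPI_Wait(req, &status);             /* progress happens inside MPI via the registered poll/wait fns */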
I have had a look at the implementation of MPI_File_iwrite() in ad_iwrite.c, but that one uses POSIX AIO, so I am not sure I am doing things properly here.

Additionally, the problem does not show up for small files (e.g. 1GB), which makes it harder to debug.

BTW, I am using weak scaling, thus every process always writes the same amount of data (64MB); I just change the number of procs in the test.

To test the modification with coll_perf I am using a configuration with 512 procs (64 nodes, 8 procs/node) and one with 16 procs (8 nodes, 2 procs/node). The 16 procs configuration writes 4 files of 1GB (16 x 64MB), the 512 procs configuration writes 4 files of 32GB (512 x 64MB).

Can someone spot any problem in my code?

Thanks,

-- 
Giuseppe Congiu · Research Engineer II
Seagate Technology, LLC
office: +44 (0)23 9249 6082 · mobile:
www.seagate.com