From zhouh at anl.gov Wed Mar 4 17:45:17 2026
From: zhouh at anl.gov (Zhou, Hui)
Date: Wed, 4 Mar 2026 23:45:17 +0000
Subject: [mpich-devel] MPICH development call tomorrow
Message-ID:

Sorry for the late notice. This week's MPICH development telecon is cancelled. There is no new development since last week. We'll resume the call next week.

Thanks,
--
Hui Zhou
Principal Research Software Engineer
Mathematics and Computer Science
Argonne National Laboratory
TEL: 630-252-3430
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zhouh at anl.gov Wed Mar 11 17:15:32 2026
From: zhouh at anl.gov (Zhou, Hui)
Date: Wed, 11 Mar 2026 22:15:32 +0000
Subject: [mpich-devel] MPICH development call tomorrow
Message-ID:

Dear MPICH family and friends,

The next MPICH development telecon will be tomorrow (Thursday) at 9am US Central time.

Agenda:
* Clean up symbols in ROMIO
* Fix ch4 CMA IPC issue
* Use templates in yaksa CUDA kernels
* Extend CGA for nonblocking pipelining collectives
* Recently merged PRs
* Recently opened PRs
* New issues

To join the Meeting:
https://teams.microsoft.com/l/meetup-join/19%3ameeting_NjU1NDBiZmUtZTM2Ni00M2Y2LTk0YWQtYzNhYTU2NjJkYzE3%40thread.v2/0?context=%7b%22Tid%22%3a%220cfca185-25f7-49e3-8ae7-704d5326e285%22%2c%22Oid%22%3a%22aa62a23c-9ba5-4144-bf95-78e5ef36b6fd%22%7d

Thanks,
--
Hui Zhou
Principal Research Software Engineer
Mathematics and Computer Science
Argonne National Laboratory
TEL: 630-252-3430

From zhouh at anl.gov Wed Mar 18 17:14:30 2026
From: zhouh at anl.gov (Zhou, Hui)
Date: Wed, 18 Mar 2026 22:14:30 +0000
Subject: [mpich-devel] MPICH development call tomorrow
Message-ID:

Dear MPICH family and friends,

The next MPICH development telecon will be tomorrow (Thursday) at 9am US Central time.
Agenda:
* Circulant graph algorithm and a pipelining collective scheduler
* Recently merged PRs
* Recently opened PRs
* New issues

To join the Meeting:
https://teams.microsoft.com/l/meetup-join/19%3ameeting_NjU1NDBiZmUtZTM2Ni00M2Y2LTk0YWQtYzNhYTU2NjJkYzE3%40thread.v2/0?context=%7b%22Tid%22%3a%220cfca185-25f7-49e3-8ae7-704d5326e285%22%2c%22Oid%22%3a%22aa62a23c-9ba5-4144-bf95-78e5ef36b6fd%22%7d

Thanks,
--
Hui Zhou
Principal Research Software Engineer
Mathematics and Computer Science
Argonne National Laboratory
TEL: 630-252-3430

From zhouh at anl.gov Thu Mar 19 09:56:44 2026
From: zhouh at anl.gov (Zhou, Hui)
Date: Thu, 19 Mar 2026 14:56:44 +0000
Subject: [mpich-devel] MPICH development call cancelled for the next 2 weeks
Message-ID:

Dear MPICH family and friends,

As I mentioned during today's call, we are cancelling the MPICH development call for the next 2 weeks due to personal vacations. We'll resume on 4/9. Enjoy the spring!

Thanks,
--
Hui Zhou
Principal Research Software Engineer
Mathematics and Computer Science
Argonne National Laboratory
TEL: 630-252-3430

From woz at anl.gov Mon Mar 30 14:24:30 2026
From: woz at anl.gov (Wozniak, Justin M.)
Date: Mon, 30 Mar 2026 19:24:30 +0000
Subject: [mpich-devel] MPICH with SYCL on Aurora
Message-ID:

Hi,

I am trying to port a simulation ensemble workflow that runs ExaEpi/AMReX/SYCL to Aurora. The outer workflow uses the system MPI, and we run the app with node-local parallelism using a hand-built MPICH. On Aurora, I get errors in early MPI calls that I think are due to SYCL. This approach works on NVIDIA systems like Perlmutter. Is there some simple way to make MPICH aware of SYCL?
Thanks
--
Justin M Wozniak

From harms at alcf.anl.gov Mon Mar 30 14:50:06 2026
From: harms at alcf.anl.gov (Harms, Kevin)
Date: Mon, 30 Mar 2026 19:50:06 +0000
Subject: [mpich-devel] MPICH with SYCL on Aurora
In-Reply-To: References: Message-ID:

Justin,

can you provide the specific error?

kevin

From raffenet at anl.gov Tue Mar 31 10:49:32 2026
From: raffenet at anl.gov (Raffenetti, Ken)
Date: Tue, 31 Mar 2026 15:49:32 +0000
Subject: [mpich-devel] MPICH 5.0.1rc1 released
Message-ID:

A new release candidate of MPICH, 5.0.1rc1, is now available for download. This release addresses several user-reported issues. For the full set of commits, see https://github.com/pmodels/mpich/compare/v5.0.0...v5.0.1rc1.
You can find the release on our downloads page (https://www.mpich.org/downloads/).

Regards,
The MPICH team

===============================================================================
                               Changes in 5.0.1
===============================================================================

# Fix bad cast in release-gather collectives that caused data loss on big-endian 64-bit architectures (s390x)

# Fix issue with canceling MPI_ANY_SOURCE receive requests

# Fix configuration issue when the C++ compiler does not support complex types

# Fix function signature issue in Hydra PBS support

# Fix crash in MPI_Allreduce with MPI_LOGICAL type

# Fix potential crash in multi-NIC libfabric initialization

# Fix memory leaks in Level Zero and PMIx support

# Fix bug in CMA code when GPU support is enabled

# Add large count and other necessary aliases to ROMIO to avoid accidental profiling of internal MPI function usage

# Add missing error checks in rndv and collective composition code

# Improve autogen.sh error message when autotools are too old

From woz at anl.gov Tue Mar 31 11:27:29 2026
From: woz at anl.gov (Wozniak, Justin M.)
Date: Tue, 31 Mar 2026 16:27:29 +0000
Subject: [mpich-devel] MPICH with SYCL on Aurora
In-Reply-To: References: Message-ID:

With MPIR_CVAR_REQUEST_ERR_FATAL=1 in a 2-process run, this looks like:

Abort(270742287) on node 0: Fatal error in internal_Waitall: Other MPI error, error stack:
internal_Waitall(126)..: MPI_Waitall(count=1, array_of_requests=0x797cdb0, array_of_statuses=0x7ca3fb0) failed
MPIR_Waitall(916)......:
MPIDI_IPC_rndv_cb(172).:
MPIDI_CMA_copy_data(54):
copy_iovs(202).........: process_vm_readv failed (errno 14)
Abort(270742287) on node 1: Fatal error in internal_Waitall: Other MPI error, error stack: (same)

This succeeds for 1-process with SYCL enabled or for 2-process with SYCL disabled in the app at configure time. The app looks like:

$ ldd =agent
libmpicxx.so.0 => /lus/flare/projects/EpiCalib/sfw/mpich-5.0.0rc3/lib/libmpicxx.so.0 (0x0000146726aed000)
libmpi.so.0 => /lus/flare/projects/EpiCalib/sfw/mpich-5.0.0rc3/lib/libmpi.so.0 (0x0000146725265000)
libmkl_sycl_blas.so.5 => /opt/aurora/26.26.0/oneapi/mkl/latest/lib/libmkl_sycl_blas.so.5 (0x0000146721063000)
(etc.)
libstdc++.so.6 => /opt/aurora/26.26.0/spack/unified/1.1.1/install/linux-x86_64/gcc-13.4.0-hgnyg4p/lib64/libstdc++.so.6 (0x0000146708cd9000)
libm.so.6 => /lib64/libm.so.6 (0x0000146708b77000)
libgcc_s.so.1 => /opt/aurora/26.26.0/spack/unified/1.1.1/install/linux-x86_64/gcc-13.4.0-hgnyg4p/lib64/libgcc_s.so.1 (0x0000146708b53000)
libsycl.so.8 => /opt/aurora/26.26.0/oneapi/compiler/latest/lib/libsycl.so.8 (0x0000146708758000)
libOpenCL.so.1 => /opt/aurora/26.26.0/support/libraries/khronos/default/lib64/libOpenCL.so.1 (0x0000146708743000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x000014670871f000)
libc.so.6 => /lib64/libc.so.6 (0x000014670852a000)
libhwloc.so.15 => /opt/aurora/26.26.0/oneapi/tcm/latest/lib/libhwloc.so.15 (0x00001467082cc000)
(etc.)

Thanks
--
Justin M Wozniak

From raffenet at anl.gov Tue Mar 31 11:30:46 2026
From: raffenet at anl.gov (Raffenetti, Ken)
Date: Tue, 31 Mar 2026 16:30:46 +0000
Subject: [mpich-devel] MPICH with SYCL on Aurora
In-Reply-To: References: Message-ID:

Which version of MPICH is this? This might be a known issue in CMA support (fixed here: https://github.com/pmodels/mpich/pull/7743). You can try disabling CMA with MPIR_CVAR_CH4_CMA_ENABLE=0 to avoid that path, or pull the fix into your copy and rebuild.

Ken

From woz at anl.gov Tue Mar 31 12:02:33 2026
From: woz at anl.gov (Wozniak, Justin M.)
Date: Tue, 31 Mar 2026 17:02:33 +0000
Subject: [mpich-devel] MPICH with SYCL on Aurora
In-Reply-To: References: Message-ID:

This is mpich-5.0.0rc3. I will try that, thanks.
--
Justin M Wozniak

From woz at anl.gov Tue Mar 31 16:28:06 2026
From: woz at anl.gov (Wozniak, Justin M.)
Date: Tue, 31 Mar 2026 21:28:06 +0000
Subject: [mpich-devel] MPICH with SYCL on Aurora
In-Reply-To: References: Message-ID:

I should have mentioned: I configured with --with-ze=no to avoid this build-time error:

  CC             src/backend/ze/pup/yaksuri_zei_get_ptr_attr.lo
  OCLOC (spirv)  src/backend/ze/pup/yaksuri_zei_pup_char.cl
Could not determine device target: skl.
Error: Cannot get HW Info for device skl.
Invalid device error, trying to fallback to former ocloc libocloc_legacy1.so
Couldn't load former ocloc libocloc_legacy1.so
Command was: ocloc compile -file src/backend/ze/pup/yaksuri_zei_pup_char.cl -device skl -spv_only -out_dir src/backend/ze/pup -output_no_suffix -q -options "-I ./src/backend/ze/include -cl-std=CL2.0"
make[2]: *** [Makefile:13671: src/backend/ze/pup/yaksuri_zei_pup_char.c] Error 223
make[2]: Leaving directory '/tmp/wozniak/mpich-5.0.0rc3-ze/src/mpi/datatype/typerep/yaksa'

Maybe this is the main thing I need to address? Do I just need to force the ocloc device for Aurora?
--
Justin M Wozniak
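The runtime workaround Ken suggests can be sketched as a small job-script fragment. This is a sketch under assumptions, not from the thread: "./agent" is a placeholder application name, and the mpiexec shown is assumed to be the hand-built MPICH's launcher.

```shell
# Hedged sketch of the suggested workaround: disable MPICH's CMA IPC path
# (which uses process_vm_readv, the call failing with errno 14 / EFAULT
# above) so intra-node rendezvous transfers take a different route.
export MPIR_CVAR_CH4_CMA_ENABLE=0      # skip the CMA (process_vm_readv) path
export MPIR_CVAR_REQUEST_ERR_FATAL=1   # keep errors fatal for readable stacks

# "./agent" is a hypothetical application name for illustration.
mpiexec -n 2 ./agent
```

The alternative Ken mentions, cherry-picking the fix from PR 7743 and rebuilding, removes the need for the environment variable but requires redeploying the hand-built MPICH.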