[mpich-discuss] Thank you for your help

Dmitriy Lyubimov dlieu.7 at gmail.com
Wed Apr 20 12:00:28 CDT 2016


Also: where can I find a reference for the PMI_xx standard?

Can I launch a worker process defined in an .so that I can load into an
already running process I can share memory with? In other words, is there
an example of how I can do, from inside an .so, the same thing that
hydra_pmi_proxy does?

Thank you.
-d

On Tue, Apr 19, 2016 at 9:21 AM, Dmitriy Lyubimov <dlieu.7 at gmail.com> wrote:

> Hello,
>
> I am new to MPI. I have a couple of questions that I hope someone could
> help me with -- I tried to scan the archives but have not immediately
> found the answers.
>
> (1) I know there is a multicast-based implementation for InfiniBand in
> MVAPICH, a derivative project from Ohio State, that apparently avoids
> packet duplication for multiple addressees when implementing broadcasts
> and scatters, with reliable communication even for larger messages
> (perhaps as large as 16-200 MB).
>
> Is there any similar technique used in MPICH for broadcasts and gathers?
> Would there be any networking benefit in using MPICH as opposed to a
> carefully organized point-to-point broadcast scheme, such as chained
> peer-to-peer or butterfly mixing?
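>
> (For concreteness, here is a minimal sketch, written by hand and not
> taken from MPICH or MVAPICH, contrasting the library collective with
> the kind of chained point-to-point scheme I mean; the message size is
> illustrative only. Compile with mpicc.)
>
>     /* chained_bcast.c -- compare MPI_Bcast with a manual chained
>      * point-to-point broadcast along ranks 0 -> 1 -> ... -> size-1 */
>     #include <mpi.h>
>     #include <stdlib.h>
>
>     int main(int argc, char **argv)
>     {
>         int rank, size;
>         MPI_Init(&argc, &argv);
>         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>         MPI_Comm_size(MPI_COMM_WORLD, &size);
>
>         const int n = 1 << 20;               /* ~4 MB of ints */
>         int *buf = malloc(n * sizeof(int));
>         if (rank == 0)
>             for (int i = 0; i < n; i++) buf[i] = i;
>
>         /* Library collective: MPICH picks the algorithm internally. */
>         MPI_Bcast(buf, n, MPI_INT, 0, MPI_COMM_WORLD);
>
>         /* Manual chain: each rank receives from rank-1, then forwards
>          * to rank+1. */
>         if (rank > 0)
>             MPI_Recv(buf, n, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
>                      MPI_STATUS_IGNORE);
>         if (rank < size - 1)
>             MPI_Send(buf, n, MPI_INT, rank + 1, 0, MPI_COMM_WORLD);
>
>         free(buf);
>         MPI_Finalize();
>         return 0;
>     }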
>
> (2) Another question is the possibility of integrating the worker
> processes with another resource manager (other than Hydra, I guess) --
> perhaps YARN or Mesos. Can I somehow bring up the worker process first
> and then enroll it into a communicator, assuming that at a certain point
> I know all process locations in the network?
>
> In other words, can I replace whatever mpirun or mpiexec does with
> custom code that would enroll all workers into the communicators and
> assign the ranks?
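>
> (Here is a minimal sketch of the kind of enrollment I mean, using the
> standard MPI dynamic-process calls rather than any launcher; how the
> port name reaches the workers -- here just printed -- is exactly the
> part a resource manager like YARN or Mesos would have to handle.)
>
>     /* enroll.c -- run once with no argument (server), then start each
>      * worker with the printed port name as argv[1] */
>     #include <mpi.h>
>     #include <stdio.h>
>
>     int main(int argc, char **argv)
>     {
>         MPI_Init(&argc, &argv);
>         MPI_Comm inter;
>         char port[MPI_MAX_PORT_NAME];
>
>         if (argc > 1) {          /* worker: connect to the given port */
>             MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0, MPI_COMM_SELF,
>                              &inter);
>         } else {                 /* server: open a port and accept */
>             MPI_Open_port(MPI_INFO_NULL, port);
>             printf("port: %s\n", port); /* hand to workers out of band */
>             MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF,
>                             &inter);
>             MPI_Close_port(port);
>         }
>
>         /* Merge the intercommunicator into a single intracommunicator;
>          * ranks are assigned by the merge order. */
>         MPI_Comm world;
>         MPI_Intercomm_merge(inter, argc > 1, &world);
>
>         MPI_Comm_free(&world);
>         MPI_Comm_free(&inter);
>         MPI_Finalize();
>         return 0;
>     }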
>
> (3) Network multitenancy: can I set up several world communicators on
> the same network independently, at different points in time? Or does it
> have to be only one world communicator at a time?
>
> Thank you very much for your help.
> -Dmitriy
>