[mpich-discuss] Error: "MPIR_Get_contextid_sparse_group(1193): Too many communicators (0/2048)"

Ted Sariyski tsariysk at craft-tech.com
Tue May 27 07:55:55 CDT 2014


Pavan and Rajeev,
Thanks for all your kind help.
Best regards,
--Ted



On 05/23/2014 01:29 PM, Rajeev Thakur wrote:
> I don't fully understand your use case, but:
>
> The window memory passed to MPI_Win_create on each process must be contiguous memory. You can manage it internally however you like.
>
> If you create a datatype on a process, the datatype is local to that process. Other processes can't use that datatype.
>
> Also the MPI standard says the following about the target datatype passed to put/get/accumulate (MPI-3, pg 420, line 33):
> "The target datatype must contain only relative displacements, not absolute addresses."
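A quick sketch (not from the thread; all names are illustrative) of why a relative displacement works where an absolute address cannot: if every particle occupies a fixed-size slot in one contiguous window buffer, the byte offset of a slot relative to the window base is the same number on every process.

```python
# Sketch: relative displacements into a contiguous RMA window.
# SLOT_BYTES and the slot layout are hypothetical assumptions.

SLOT_BYTES = 64  # assumed fixed size of one particle record


def target_displacement(particle_index):
    """Byte offset of a particle's slot, relative to the window base.

    This is the kind of value a target datatype may encode: it is
    identical on every process, unlike an absolute address, which is
    only meaningful in the address space of the process that took it.
    """
    return particle_index * SLOT_BYTES


print(target_displacement(0))  # 0: first slot starts at the window base
print(target_displacement(3))  # 192: fourth slot starts 3 slots in
```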
>
> Rajeev
>
>
> On May 23, 2014, at 9:44 AM, Ted Sariyski <tsariysk at craft-tech.com> wrote:
>
>> I need a little bit more help. My question relates to the structure of the array (referred to below as A) for which a window object will be created.
>>
>> Let's say that particles are of type particle_t:
>>
>> type particle_t
>>        integer :: host, id, ...
>>        real, pointer :: data(:) => null()
>> end type particle_t
>>
>> I built two corresponding MPI types: mpi_particles_t, for the case where data is allocated, and mpi_particles_null_t, for the case where p%data => null().
>>
>> What I want to do is the following:
>>      1. All processes allocate an array of type particle_t for all particles, but data is allocated only if the particle is assigned to the process's domain:
>>
>>          allocate(A(tot_num_particles))
>>          do n = 1, tot_num_particles
>>              if ( A(n)%host == myid ) allocate(A(n)%data(data_size))
>>          enddo
>>
>>      2. All processes create their MPI data type mpi_A_myid_t:
>>
>>          do n = 1, tot_num_particles
>>              if ( A(n)%host == myid ) then
>>                  blockcounts(n) = 1 ; oldtypes(n) = mpi_particles_t
>>              else
>>                  blockcounts(n) = 1 ; oldtypes(n) = mpi_particles_null_t
>>              endif
>>              call MPI_GET_ADDRESS(A(n), offsets(n), ierr)
>>          enddo
>>
>>          call mpi_type_create_struct(tot_num_particles, blockcounts, offsets, oldtypes, mpi_A_myid_t, ierr)
>>          call mpi_type_commit(mpi_A_myid_t, ierr)
>>          
>> I gave it a try but it fails. Before I proceed I want to make sure that there is nothing fundamentally wrong with this approach. Besides, even if it is correct, I am not sure that this is the best solution. I would highly appreciate your comments.
>>
>> Thanks in advance,
>> --Ted
>>
>>
>>
>>
>>
>> On 05/22/2014 03:31 PM, Ted Sariyski wrote:
>>> I see.  Thanks a lot.
>>> --Ted
>>>
>>>
>>> On 05/22/2014 03:15 PM, Rajeev Thakur wrote:
>>>>> What do you mean by: "Why can’t all processes open one large window?" I guess I'm missing something.
>>>> If each process has an array of objects that belong to it (call it array A), then with a single call to MPI_Win_create you can create a window object that contains everyone's A arrays.
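The single-window idea above can be sketched in plain Python (illustrative only; the per-rank counts and record size are assumptions): with one window spanning every rank's A array, an origin addresses remote data as a (target rank, displacement) pair, where the displacement is relative to that rank's own window base.

```python
# Sketch: mapping a global particle index onto one large window
# made of every rank's local A array. All values are hypothetical.

PARTICLE_BYTES = 64               # assumed fixed record size
particles_per_rank = [10, 7, 12]  # assumed per-rank particle counts


def locate(global_index):
    """Map a global particle index to (owner_rank, byte_displacement).

    The displacement is relative to the owner's window base, which is
    exactly what an MPI_Put/MPI_Get on the shared window would need.
    """
    for rank, count in enumerate(particles_per_rank):
        if global_index < count:
            return rank, global_index * PARTICLE_BYTES
        global_index -= count
    raise IndexError("no such particle")


print(locate(0))   # (0, 0): first particle, at rank 0's window base
print(locate(12))  # (1, 128): third particle owned by rank 1
```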
>>>>
>>>> Rajeev
>>>>
>>>>
>>>>
>>>> On May 22, 2014, at 9:46 AM, Ted Sariyski <tsariysk at craft-tech.com> wrote:
>>>>
>>>>> It is about MPI_Win. Here is the problem as it relates to MPI (it is a Boltzmann-type equation).
>>>>>
>>>>> There are N particles interacting with each other. Interaction is directional, so a particle interacts only with those particles that are within a narrow cone. The first step is to segregate the initial set of N particles into subsets of particles (I call them 'objects') that interact with each other. Here is what I do:
>>>>>      • Assign each object to a process.
>>>>>      • The process which owns an object:
>>>>>          • Makes a guess for the maximum number of particles expected in this object.
>>>>>          • Allocates memory for it.
>>>>>          • Opens a shared window.
>>>>>      • All processes
>>>>>          • Each particle identifies which object it belongs to, and PUTs its data there.
>>>>>      • After assembly is done, objects are passed to a solver.
>>>>>      • Repeat
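The assembly step in the list above can be simulated in pure Python (illustrative only; `which_object` is a stand-in for the real geometric cone test, and the angle-based rule is a made-up example): each particle identifies its object and its data is "put" into that object's buffer.

```python
# Pure-Python simulation of the assembly step: group each particle's
# data under the object it belongs to. The cone test is hypothetical.


def which_object(particle):
    # Stand-in for the real directional test: bin by 10-degree cones.
    return particle["angle"] // 10


particles = [{"angle": 3, "data": "a"},
             {"angle": 14, "data": "b"},
             {"angle": 17, "data": "c"}]

objects = {}  # object id -> assembled data buffer
for p in particles:
    objects.setdefault(which_object(p), []).append(p["data"])

print(objects)  # {0: ['a'], 1: ['b', 'c']}
```

In the real code each `objects[...]` buffer would live in the window memory of the owning process, and the `append` would be an MPI_Put into a slot reserved for that particle.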
>>>>> What do you mean by: "Why can’t all processes open one large window?" I guess I'm missing something.
>>>>> Thanks,
>>>>> --Ted
>>>>>
>>>>>
>>>>> On 05/21/2014 11:03 PM, Balaji, Pavan wrote:
>>>>>> On May 21, 2014, at 6:02 PM, Ted Sariyski <tsariysk at craft-tech.com>
>>>>>>    wrote:
>>>>>>
>>>>>>> Memory limitations. With one large window, all processes have to allocate memory for the objects they own as well as for objects assigned to other processes.
>>>>>>>
>>>>>> Are we talking about the same thing here?  I’m referring to an MPI_Win.  What objects do processes need to keep track of?
>>>>>>
>>>>>>     — Pavan
>>>>>>
>>>>>> _______________________________________________
>>>>>> discuss mailing list     discuss at mpich.org
>>>>>> To manage subscription options or unsubscribe:
>>>>>> https://lists.mpich.org/mailman/listinfo/discuss
>>>
