libMesh::MeshCommunication Class Reference

#include <mesh_communication.h>

Public Member Functions

 MeshCommunication ()
 
 ~MeshCommunication ()
 
void clear ()
 
void broadcast (MeshBase &) const
 
void redistribute (DistributedMesh &mesh, bool newly_coarsened_only=false) const
 
void gather_neighboring_elements (DistributedMesh &) const
 
void send_coarse_ghosts (MeshBase &) const
 
void gather (const processor_id_type root_id, DistributedMesh &) const
 
void allgather (DistributedMesh &mesh) const
 
void delete_remote_elements (DistributedMesh &, const std::set< Elem *> &) const
 
void assign_global_indices (MeshBase &) const
 
void check_for_duplicate_global_indices (MeshBase &) const
 
template<typename ForwardIterator >
void find_local_indices (const libMesh::BoundingBox &, const ForwardIterator &, const ForwardIterator &, std::unordered_map< dof_id_type, dof_id_type > &) const
 
template<typename ForwardIterator >
void find_global_indices (const Parallel::Communicator &communicator, const libMesh::BoundingBox &, const ForwardIterator &, const ForwardIterator &, std::vector< dof_id_type > &) const
 
void make_elems_parallel_consistent (MeshBase &)
 
void make_p_levels_parallel_consistent (MeshBase &)
 
void make_node_ids_parallel_consistent (MeshBase &)
 
void make_node_unique_ids_parallel_consistent (MeshBase &)
 
void make_node_proc_ids_parallel_consistent (MeshBase &)
 
void make_new_node_proc_ids_parallel_consistent (MeshBase &)
 
void make_nodes_parallel_consistent (MeshBase &)
 
void make_new_nodes_parallel_consistent (MeshBase &)
 

Detailed Description

This is the MeshCommunication class. It handles all the details of communicating mesh information from one processor to another. All parallelization of the Mesh data structures is done via this class.

Author
Benjamin S. Kirk
Date
2003

Definition at line 50 of file mesh_communication.h.

Constructor & Destructor Documentation

◆ MeshCommunication()

libMesh::MeshCommunication::MeshCommunication ( )
inline

Constructor.

Definition at line 57 of file mesh_communication.h.

57 {}

◆ ~MeshCommunication()

libMesh::MeshCommunication::~MeshCommunication ( )
inline

Destructor.

Definition at line 62 of file mesh_communication.h.

62 {}

Member Function Documentation

◆ allgather()

void libMesh::MeshCommunication::allgather ( DistributedMesh & mesh ) const
inline

This method takes an input DistributedMesh which may be distributed among all the processors. Each processor then sends its local nodes and elements to the other processors. The end result is that a previously distributed DistributedMesh will be serialized on each processor. Since this method is collective it must be called by all processors.
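
For example, a minimal sketch (illustrative only; it assumes libMesh has been initialized and the mesh has already been read and distributed). DistributedMesh::allgather() and DistributedMesh::delete_remote_elements() are the usual entry points and forward to the methods documented on this page:

#include "libmesh/distributed_mesh.h"

using namespace libMesh;

void example (DistributedMesh & mesh)
{
  // Collective: afterwards every processor stores all nodes and elements.
  mesh.allgather();

  // ... operate on the temporarily serialized mesh ...

  // Collective: return to a distributed state, keeping only local and
  // ghost elements (and the nodes they need) on each processor.
  mesh.delete_remote_elements();
}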

Definition at line 135 of file mesh_communication.h.

References gather(), libMesh::DofObject::invalid_processor_id, and mesh.

Referenced by libMesh::DistributedMesh::allgather().


◆ assign_global_indices()

void libMesh::MeshCommunication::assign_global_indices ( MeshBase & mesh ) const

This method assigns globally unique, partition-agnostic indices to the nodes and elements in the mesh. The approach is to compute the Hilbert space-filling curve key and use its value to assign an index in [0,N_global). Since the Hilbert key is unique for each spatial location, two objects occupying the same location will be assigned the same global id. Thus, this method can also be useful for identifying duplicate nodes which may occur during parallel refinement.
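
A minimal sketch of calling this directly (illustrative only; within the library it is reached through MeshTools::Private::globally_renumber_nodes_and_elements(), and the follow-up call to MeshBase::fix_broken_node_and_element_numbering() is an assumption about the cleanup a mesh needs after its ids are changed in place):

#include "libmesh/mesh_base.h"
#include "libmesh/mesh_communication.h"

using namespace libMesh;

void example (MeshBase & mesh)
{
  // Collective: ids come from each object's position along a Hilbert
  // space-filling curve, so they are independent of the partitioning.
  MeshCommunication().assign_global_indices(mesh);

  // Assumed cleanup: ids were changed directly on the nodes and elements,
  // so let the mesh rebuild its id-based bookkeeping.
  mesh.fix_broken_node_and_element_numbering();
}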

Definition at line 168 of file mesh_communication_global_indices.C.

References libMesh::Parallel::Sort< KeyType, IdxType >::bin(), libMesh::ParallelObject::comm(), libMesh::MeshTools::create_nodal_bounding_box(), libMesh::MeshBase::element_ptr_range(), end, libMesh::MeshTools::Generation::Private::idx(), libMesh::MeshBase::local_elements_begin(), libMesh::MeshBase::local_elements_end(), libMesh::MeshBase::local_nodes_begin(), libMesh::MeshBase::local_nodes_end(), mesh, libMesh::MeshBase::n_elem(), libMesh::MeshBase::n_nodes(), libMesh::MeshBase::node_ptr_range(), libMesh::Threads::parallel_for(), libMesh::Parallel::pull_parallel_vector_data(), and libMesh::Parallel::Sort< KeyType, IdxType >::sort().

Referenced by libMesh::MeshTools::Private::globally_renumber_nodes_and_elements().

169 {
170  LOG_SCOPE ("assign_global_indices()", "MeshCommunication");
171 
172  // This method determines partition-agnostic global indices
173  // for nodes and elements.
174 
175  // Algorithm:
176  // (1) compute the Hilbert key for each local node/element
177  // (2) perform a parallel sort of the Hilbert key
178  // (3) get the min/max value on each processor
179  // (4) determine the position in the global ranking for
180  // each local object
181 
183 
184  // Global bounding box. We choose the nodal bounding box for
185  // backwards compatibility; the element bounding box may be looser
186  // on curved elements.
187  BoundingBox bbox =
189 
190  //-------------------------------------------------------------
191  // (1) compute Hilbert keys
192  std::vector<Parallel::DofObjectKey>
193  node_keys, elem_keys;
194 
195  {
196  // Nodes first
197  {
200  node_keys.resize (nr.size());
201  Threads::parallel_for (nr, ComputeHilbertKeys (bbox, node_keys));
202 
203  // // It's O(N^2) to check that these keys don't duplicate before the
204  // // sort...
205  // MeshBase::const_node_iterator nodei = mesh.local_nodes_begin();
206  // for (std::size_t i = 0; i != node_keys.size(); ++i, ++nodei)
207  // {
208  // MeshBase::const_node_iterator nodej = mesh.local_nodes_begin();
209  // for (std::size_t j = 0; j != i; ++j, ++nodej)
210  // {
211  // if (node_keys[i] == node_keys[j])
212  // {
213  // CFixBitVec icoords[3], jcoords[3];
214  // get_hilbert_coords(**nodej, bbox, jcoords);
215  // libMesh::err <<
216  // "node " << (*nodej)->id() << ", " <<
217  // *(Point *)(*nodej) << " has HilbertIndices " <<
218  // node_keys[j] << std::endl;
219  // get_hilbert_coords(**nodei, bbox, icoords);
220  // libMesh::err <<
221  // "node " << (*nodei)->id() << ", " <<
222  // *(Point *)(*nodei) << " has HilbertIndices " <<
223  // node_keys[i] << std::endl;
224  // libmesh_error_msg("Error: nodes with duplicate Hilbert keys!");
225  // }
226  // }
227  // }
228  }
229 
230  // Elements next
231  {
234  elem_keys.resize (er.size());
235  Threads::parallel_for (er, ComputeHilbertKeys (bbox, elem_keys));
236 
237  // // For elements, the keys can be (and in the case of TRI, are
238  // // expected to be) duplicates, but only if the elements are at
239  // // different levels
240  // MeshBase::const_element_iterator elemi = mesh.local_elements_begin();
241  // for (std::size_t i = 0; i != elem_keys.size(); ++i, ++elemi)
242  // {
243  // MeshBase::const_element_iterator elemj = mesh.local_elements_begin();
244  // for (std::size_t j = 0; j != i; ++j, ++elemj)
245  // {
246  // if ((elem_keys[i] == elem_keys[j]) &&
247  // ((*elemi)->level() == (*elemj)->level()))
248  // {
249  // libMesh::err <<
250  // "level " << (*elemj)->level() << " elem\n" <<
251  // (**elemj) << " centroid " <<
252  // (*elemj)->centroid() << " has HilbertIndices " <<
253  // elem_keys[j] << " or " <<
254  // get_dofobject_key((*elemj), bbox) <<
255  // std::endl;
256  // libMesh::err <<
257  // "level " << (*elemi)->level() << " elem\n" <<
258  // (**elemi) << " centroid " <<
259  // (*elemi)->centroid() << " has HilbertIndices " <<
260  // elem_keys[i] << " or " <<
261  // get_dofobject_key((*elemi), bbox) <<
262  // std::endl;
263  // libmesh_error_msg("Error: level " << (*elemi)->level() << " elements with duplicate Hilbert keys!");
264  // }
265  // }
266  // }
267  }
268  } // done computing Hilbert keys
269 
270 
271 
272  //-------------------------------------------------------------
273  // (2) parallel sort the Hilbert keys
275  node_keys);
276  node_sorter.sort(); /* done with node_keys */ //node_keys.clear();
277 
278  const std::vector<Parallel::DofObjectKey> & my_node_bin =
279  node_sorter.bin();
280 
282  elem_keys);
283  elem_sorter.sort(); /* done with elem_keys */ //elem_keys.clear();
284 
285  const std::vector<Parallel::DofObjectKey> & my_elem_bin =
286  elem_sorter.bin();
287 
288 
289 
290  //-------------------------------------------------------------
291  // (3) get the max value on each processor
292  std::vector<Parallel::DofObjectKey>
293  node_upper_bounds(communicator.size()),
294  elem_upper_bounds(communicator.size());
295 
296  { // limit scope of temporaries
297  std::vector<Parallel::DofObjectKey> recvbuf(2*communicator.size());
298  std::vector<unsigned short int> /* do not use a vector of bools here since it is not always so! */
299  empty_nodes (communicator.size()),
300  empty_elem (communicator.size());
301  std::vector<Parallel::DofObjectKey> my_max(2);
302 
303  communicator.allgather (static_cast<unsigned short int>(my_node_bin.empty()), empty_nodes);
304  communicator.allgather (static_cast<unsigned short int>(my_elem_bin.empty()), empty_elem);
305 
306  if (!my_node_bin.empty()) my_max[0] = my_node_bin.back();
307  if (!my_elem_bin.empty()) my_max[1] = my_elem_bin.back();
308 
309  communicator.allgather (my_max, /* identical_buffer_sizes = */ true);
310 
311  // Be careful here. The *_upper_bounds will be used to find the processor
312  // a given object belongs to. So, if a processor contains no objects (possible!)
313  // then copy the bound from the lower processor id.
314  for (processor_id_type p=0; p<communicator.size(); p++)
315  {
316  node_upper_bounds[p] = my_max[2*p+0];
317  elem_upper_bounds[p] = my_max[2*p+1];
318 
319  if (p > 0) // default hilbert index value is the OK upper bound for processor 0.
320  {
321  if (empty_nodes[p]) node_upper_bounds[p] = node_upper_bounds[p-1];
322  if (empty_elem[p]) elem_upper_bounds[p] = elem_upper_bounds[p-1];
323  }
324  }
325  }
326 
327 
328 
329  //-------------------------------------------------------------
330  // (4) determine the position in the global ranking for
331  // each local object
332  {
333  //----------------------------------------------
334  // Nodes first -- all nodes, not just local ones
335  {
336  // Request sets to send to each processor
337  std::map<processor_id_type, std::vector<Parallel::DofObjectKey>>
338  requested_ids;
339  // Results to gather from each processor - kept in a map so we
340  // do only one loop over nodes after all receives are done.
341  std::map<dof_id_type, std::vector<dof_id_type>>
342  filled_request;
343 
344  // build up list of requests
345  for (const auto & node : mesh.node_ptr_range())
346  {
347  libmesh_assert(node);
348  const Parallel::DofObjectKey hi =
349  get_dofobject_key (node, bbox);
350  const processor_id_type pid =
351  cast_int<processor_id_type>
352  (std::distance (node_upper_bounds.begin(),
353  std::lower_bound(node_upper_bounds.begin(),
354  node_upper_bounds.end(),
355  hi)));
356 
357  libmesh_assert_less (pid, communicator.size());
358 
359  requested_ids[pid].push_back(hi);
360  }
361 
362  // The number of objects in my_node_bin on each processor
363  std::vector<dof_id_type> node_bin_sizes(communicator.size());
364  communicator.allgather (static_cast<dof_id_type>(my_node_bin.size()), node_bin_sizes);
365 
366  // The offset of my first global index
367  dof_id_type my_offset = 0;
368  for (processor_id_type pid=0; pid<communicator.rank(); pid++)
369  my_offset += node_bin_sizes[pid];
370 
371  auto gather_functor =
372  [
373 #ifndef NDEBUG
374  & node_upper_bounds,
375  & communicator,
376 #endif
377  & my_node_bin,
378  my_offset
379  ]
381  const std::vector<Parallel::DofObjectKey> & keys,
382  std::vector<dof_id_type> & global_ids)
383  {
384  // Fill the requests
385  const std::size_t keys_size = keys.size();
386  global_ids.reserve(keys_size);
387  for (std::size_t idx=0; idx != keys_size; idx++)
388  {
389  const Parallel::DofObjectKey & hi = keys[idx];
390  libmesh_assert_less_equal (hi, node_upper_bounds[communicator.rank()]);
391 
392  // find the requested index in my node bin
393  std::vector<Parallel::DofObjectKey>::const_iterator pos =
394  std::lower_bound (my_node_bin.begin(), my_node_bin.end(), hi);
395  libmesh_assert (pos != my_node_bin.end());
396  libmesh_assert_equal_to (*pos, hi);
397 
398  // Finally, assign the global index based off the position of the index
399  // in my array, properly offset.
400  global_ids.push_back(cast_int<dof_id_type>(std::distance(my_node_bin.begin(), pos) + my_offset));
401  }
402  };
403 
404  auto action_functor =
405  [&filled_request]
406  (processor_id_type pid,
407  const std::vector<Parallel::DofObjectKey> &,
408  const std::vector<dof_id_type> & global_ids)
409  {
410  filled_request[pid] = global_ids;
411  };
412 
413  // Trade requests with other processors
414  const dof_id_type * ex = nullptr;
416  (communicator, requested_ids, gather_functor, action_functor, ex);
417 
418  // We now have all the filled requests, so we can loop through our
419  // nodes once and assign the global index to each one.
420  {
421  std::map<dof_id_type, std::vector<dof_id_type>::const_iterator>
422  next_obj_on_proc;
423  for (auto & p : filled_request)
424  next_obj_on_proc[p.first] = p.second.begin();
425 
426  for (auto & node : mesh.node_ptr_range())
427  {
428  libmesh_assert(node);
429  const Parallel::DofObjectKey hi =
430  get_dofobject_key (node, bbox);
431  const processor_id_type pid =
432  cast_int<processor_id_type>
433  (std::distance (node_upper_bounds.begin(),
434  std::lower_bound(node_upper_bounds.begin(),
435  node_upper_bounds.end(),
436  hi)));
437 
438  libmesh_assert_less (pid, communicator.size());
439  libmesh_assert (next_obj_on_proc[pid] != filled_request[pid].end());
440 
441  const dof_id_type global_index = *next_obj_on_proc[pid];
442  libmesh_assert_less (global_index, mesh.n_nodes());
443  node->set_id() = global_index;
444 
445  ++next_obj_on_proc[pid];
446  }
447  }
448  }
449 
450  //---------------------------------------------------
451  // elements next -- all elements, not just local ones
452  {
453  // Request sets to send to each processor
454  std::map<processor_id_type, std::vector<Parallel::DofObjectKey>>
455  requested_ids;
456  // Results to gather from each processor - kept in a map so we
457  // do only one loop over elements after all receives are done.
458  std::map<dof_id_type, std::vector<dof_id_type>>
459  filled_request;
460 
461  for (const auto & elem : mesh.element_ptr_range())
462  {
463  libmesh_assert(elem);
464  const Parallel::DofObjectKey hi =
465  get_dofobject_key (elem, bbox);
466  const processor_id_type pid =
467  cast_int<processor_id_type>
468  (std::distance (elem_upper_bounds.begin(),
469  std::lower_bound(elem_upper_bounds.begin(),
470  elem_upper_bounds.end(),
471  hi)));
472 
473  libmesh_assert_less (pid, communicator.size());
474 
475  requested_ids[pid].push_back(hi);
476  }
477 
478  // The number of objects in my_elem_bin on each processor
479  std::vector<dof_id_type> elem_bin_sizes(communicator.size());
480  communicator.allgather (static_cast<dof_id_type>(my_elem_bin.size()), elem_bin_sizes);
481 
482  // The offset of my first global index
483  dof_id_type my_offset = 0;
484  for (processor_id_type pid=0; pid<communicator.rank(); pid++)
485  my_offset += elem_bin_sizes[pid];
486 
487  auto gather_functor =
488  [
489 #ifndef NDEBUG
490  & elem_upper_bounds,
491  & communicator,
492 #endif
493  & my_elem_bin,
494  my_offset
495  ]
497  const std::vector<Parallel::DofObjectKey> & keys,
498  std::vector<dof_id_type> & global_ids)
499  {
500  // Fill the requests
501  const std::size_t keys_size = keys.size();
502  global_ids.reserve(keys_size);
503  for (std::size_t idx=0; idx != keys_size; idx++)
504  {
505  const Parallel::DofObjectKey & hi = keys[idx];
506  libmesh_assert_less_equal (hi, elem_upper_bounds[communicator.rank()]);
507 
508  // find the requested index in my elem bin
509  std::vector<Parallel::DofObjectKey>::const_iterator pos =
510  std::lower_bound (my_elem_bin.begin(), my_elem_bin.end(), hi);
511  libmesh_assert (pos != my_elem_bin.end());
512  libmesh_assert_equal_to (*pos, hi);
513 
514  // Finally, assign the global index based off the position of the index
515  // in my array, properly offset.
516  global_ids.push_back (cast_int<dof_id_type>(std::distance(my_elem_bin.begin(), pos) + my_offset));
517  }
518  };
519 
520  auto action_functor =
521  [&filled_request]
522  (processor_id_type pid,
523  const std::vector<Parallel::DofObjectKey> &,
524  const std::vector<dof_id_type> & global_ids)
525  {
526  filled_request[pid] = global_ids;
527  };
528 
529  // Trade requests with other processors
530  const dof_id_type * ex = nullptr;
532  (communicator, requested_ids, gather_functor, action_functor, ex);
533 
534  // We now have all the filled requests, so we can loop through our
535  // elements once and assign the global index to each one.
536  {
537  std::vector<std::vector<dof_id_type>::const_iterator>
538  next_obj_on_proc; next_obj_on_proc.reserve(communicator.size());
539  for (processor_id_type pid=0; pid<communicator.size(); pid++)
540  next_obj_on_proc.push_back(filled_request[pid].begin());
541 
542  for (auto & elem : mesh.element_ptr_range())
543  {
544  libmesh_assert(elem);
545  const Parallel::DofObjectKey hi =
546  get_dofobject_key (elem, bbox);
547  const processor_id_type pid =
548  cast_int<processor_id_type>
549  (std::distance (elem_upper_bounds.begin(),
550  std::lower_bound(elem_upper_bounds.begin(),
551  elem_upper_bounds.end(),
552  hi)));
553 
554  libmesh_assert_less (pid, communicator.size());
555  libmesh_assert (next_obj_on_proc[pid] != filled_request[pid].end());
556 
557  const dof_id_type global_index = *next_obj_on_proc[pid];
558  libmesh_assert_less (global_index, mesh.n_elem());
559  elem->set_id() = global_index;
560 
561  ++next_obj_on_proc[pid];
562  }
563  }
564  }
565  }
566 }

◆ broadcast()

void libMesh::MeshCommunication::broadcast ( MeshBase & mesh ) const

This method takes a mesh (which is assumed to reside on processor 0) and broadcasts it to all the other processors. It also broadcasts any boundary information the mesh has associated with it.
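
For example, a minimal sketch of the usual pattern (illustrative only; building the mesh by hand on processor 0 stands in for a serial file reader):

#include "libmesh/libmesh.h"
#include "libmesh/replicated_mesh.h"
#include "libmesh/mesh_communication.h"
#include "libmesh/face_quad4.h"
#include "libmesh/point.h"

using namespace libMesh;

int main (int argc, char ** argv)
{
  LibMeshInit init (argc, argv);

  ReplicatedMesh mesh (init.comm());

  if (mesh.processor_id() == 0)
    {
      // Purely local construction of a single QUAD4 on the unit square.
      mesh.add_point (Point(0., 0.), 0);
      mesh.add_point (Point(1., 0.), 1);
      mesh.add_point (Point(1., 1.), 2);
      mesh.add_point (Point(0., 1.), 3);

      Elem * elem = mesh.add_elem (new Quad4);
      for (unsigned int n = 0; n != 4; ++n)
        elem->set_node(n) = mesh.node_ptr(n);
    }

  // Collective: every processor participates; afterwards each one holds
  // a copy of the mesh that previously existed only on processor 0.
  MeshCommunication().broadcast (mesh);

  mesh.prepare_for_use ();

  return 0;
}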

Definition at line 1084 of file mesh_communication.C.

Referenced by libMesh::NameBasedIO::read(), and libMesh::CheckpointIO::read().

1085 {
1086  // no MPI == one processor, no need for this method...
1087  return;
1088 }

◆ check_for_duplicate_global_indices()

void libMesh::MeshCommunication::check_for_duplicate_global_indices ( MeshBase & mesh ) const

Throw an error if we have any index clashes in the numbering used by assign_global_indices.
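
A minimal sketch (illustrative only; typically worth running only in debug builds, since the check is O(N^2) in the number of local objects):

#include "libmesh/mesh_base.h"
#include "libmesh/mesh_communication.h"

using namespace libMesh;

void example (MeshBase & mesh)
{
#ifdef DEBUG
  // Aborts with a diagnostic if two nodes, or two same-level elements,
  // map to the same Hilbert key (e.g. because they occupy the same point).
  MeshCommunication().check_for_duplicate_global_indices(mesh);
#endif
}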

Definition at line 575 of file mesh_communication_global_indices.C.

References libMesh::MeshTools::create_nodal_bounding_box(), libMesh::err, libMesh::MeshBase::local_elements_begin(), libMesh::MeshBase::local_elements_end(), libMesh::MeshBase::local_nodes_begin(), libMesh::MeshBase::local_nodes_end(), mesh, and libMesh::Threads::parallel_for().

576 {
577  LOG_SCOPE ("check_for_duplicate_global_indices()", "MeshCommunication");
578 
579  // Global bounding box. We choose the nodal bounding box for
580  // backwards compatibility; the element bounding box may be looser
581  // on curved elements.
582  BoundingBox bbox =
584 
585 
586  std::vector<Parallel::DofObjectKey>
587  node_keys, elem_keys;
588 
589  {
590  // Nodes first
591  {
594  node_keys.resize (nr.size());
595  Threads::parallel_for (nr, ComputeHilbertKeys (bbox, node_keys));
596 
597  // It's O(N^2) to check that these keys don't duplicate before the
598  // sort...
600  for (std::size_t i = 0; i != node_keys.size(); ++i, ++nodei)
601  {
603  for (std::size_t j = 0; j != i; ++j, ++nodej)
604  {
605  if (node_keys[i] == node_keys[j])
606  {
607  CFixBitVec icoords[3], jcoords[3];
608  get_hilbert_coords(**nodej, bbox, jcoords);
609  libMesh::err <<
610  "node " << (*nodej)->id() << ", " <<
611  *(Point *)(*nodej) << " has HilbertIndices " <<
612  node_keys[j] << std::endl;
613  get_hilbert_coords(**nodei, bbox, icoords);
614  libMesh::err <<
615  "node " << (*nodei)->id() << ", " <<
616  *(Point *)(*nodei) << " has HilbertIndices " <<
617  node_keys[i] << std::endl;
618  libmesh_error_msg("Error: nodes with duplicate Hilbert keys!");
619  }
620  }
621  }
622  }
623 
624  // Elements next
625  {
628  elem_keys.resize (er.size());
629  Threads::parallel_for (er, ComputeHilbertKeys (bbox, elem_keys));
630 
631  // For elements, the keys can be (and in the case of TRI, are
632  // expected to be) duplicates, but only if the elements are at
633  // different levels
635  for (std::size_t i = 0; i != elem_keys.size(); ++i, ++elemi)
636  {
638  for (std::size_t j = 0; j != i; ++j, ++elemj)
639  {
640  if ((elem_keys[i] == elem_keys[j]) &&
641  ((*elemi)->level() == (*elemj)->level()))
642  {
643  libMesh::err <<
644  "level " << (*elemj)->level() << " elem\n" <<
645  (**elemj) << " centroid " <<
646  (*elemj)->centroid() << " has HilbertIndices " <<
647  elem_keys[j] << " or " <<
648  get_dofobject_key((*elemj), bbox) <<
649  std::endl;
650  libMesh::err <<
651  "level " << (*elemi)->level() << " elem\n" <<
652  (**elemi) << " centroid " <<
653  (*elemi)->centroid() << " has HilbertIndices " <<
654  elem_keys[i] << " or " <<
655  get_dofobject_key((*elemi), bbox) <<
656  std::endl;
657  libmesh_error_msg("Error: level " << (*elemi)->level() << " elements with duplicate Hilbert keys!");
658  }
659  }
660  }
661  }
662  } // done checking Hilbert keys
663 }

◆ clear()

void libMesh::MeshCommunication::clear ( )

Clears all data structures and resets to a pristine state.

Definition at line 275 of file mesh_communication.C.

276 {
277  // _neighboring_processors.clear();
278 }

◆ delete_remote_elements()

void libMesh::MeshCommunication::delete_remote_elements ( DistributedMesh & mesh,
const std::set< Elem *> &  extra_ghost_elem_ids 
) const

This method takes an input DistributedMesh which may be distributed among all the processors. Each processor deletes all elements which are neither local elements nor "ghost" elements which touch local elements, and deletes all nodes which are not contained in local or ghost elements. The end result is that a previously serial DistributedMesh will be distributed between processors. Since this method is collective it must be called by all processors.

The std::set specifies extra elements that should not be deleted; these are kept on the current processor along with the local elements and their ghosted neighbors.
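
For example, a minimal sketch (illustrative only; DistributedMesh::delete_remote_elements() is the usual entry point and forwards here):

#include "libmesh/distributed_mesh.h"

using namespace libMesh;

void example (DistributedMesh & mesh)
{
  // Collective: afterwards each processor keeps only its local elements,
  // the ghost elements touching them, and the nodes those elements use.
  mesh.delete_remote_elements();
}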

Definition at line 1844 of file mesh_communication.C.

References libMesh::Elem::active_family_tree(), libMesh::MeshBase::active_pid_elements_begin(), libMesh::MeshBase::active_pid_elements_end(), libMesh::as_range(), libMesh::MeshBase::clear_point_locator(), libMesh::ParallelObject::comm(), libMesh::connect_children(), libMesh::connect_families(), libMesh::MeshBase::delete_elem(), libMesh::MeshBase::delete_node(), libMesh::MeshBase::get_boundary_info(), libMesh::MeshBase::ghosting_functors_begin(), libMesh::MeshBase::ghosting_functors_end(), libMesh::MeshBase::is_serial(), libMesh::MeshBase::level_elements_begin(), libMesh::MeshBase::level_elements_end(), libMesh::MeshTools::libmesh_assert_valid_refinement_tree(), libMesh::Elem::make_links_to_me_remote(), libMesh::MeshBase::max_elem_id(), libMesh::MeshBase::max_node_id(), mesh, libMesh::MeshTools::n_levels(), libMesh::MeshBase::node_ptr_range(), libMesh::MeshBase::pid_elements_begin(), libMesh::MeshBase::pid_elements_end(), libMesh::ParallelObject::processor_id(), libMesh::query_ghosting_functors(), libMesh::reconnect_nodes(), libMesh::BoundaryInfo::regenerate_id_sets(), libMesh::Elem::subactive(), and libMesh::Parallel::Communicator::verify().

Referenced by libMesh::DistributedMesh::delete_remote_elements().

1846 {
1847  // The mesh should know it's about to be parallelized
1848  libmesh_assert (!mesh.is_serial());
1849 
1850  LOG_SCOPE("delete_remote_elements()", "MeshCommunication");
1851 
1852 #ifdef DEBUG
1853  // We expect maximum ids to be in sync so we can use them to size
1854  // vectors
1855  libmesh_assert(mesh.comm().verify(mesh.max_node_id()));
1856  libmesh_assert(mesh.comm().verify(mesh.max_elem_id()));
1857  const dof_id_type par_max_node_id = mesh.parallel_max_node_id();
1858  const dof_id_type par_max_elem_id = mesh.parallel_max_elem_id();
1859  libmesh_assert_equal_to (par_max_node_id, mesh.max_node_id());
1860  libmesh_assert_equal_to (par_max_elem_id, mesh.max_elem_id());
1861 #endif
1862 
1863  std::set<const Elem *, CompareElemIdsByLevel> elements_to_keep;
1864 
1865  // Don't delete elements that we were explicitly told not to
1866  for (const auto & elem : extra_ghost_elem_ids)
1867  {
1868  std::vector<const Elem *> active_family;
1869 #ifdef LIBMESH_ENABLE_AMR
1870  if (!elem->subactive())
1871  elem->active_family_tree(active_family);
1872  else
1873 #endif
1874  active_family.push_back(elem);
1875 
1876  for (std::size_t i=0; i != active_family.size(); ++i)
1877  elements_to_keep.insert(active_family[i]);
1878  }
1879 
1880  // See which elements we still need to keep ghosted, given that
1881  // we're keeping local and unpartitioned elements.
1883  (mesh, mesh.processor_id(),
1886  elements_to_keep);
1891  elements_to_keep);
1892 
1893  // The inactive elements we need to send should have their
1894  // immediate children present.
1897  elements_to_keep);
1901  elements_to_keep);
1902 
1903  // The elements we need should have their ancestors and their
1904  // subactive children present too.
1905  connect_families(elements_to_keep);
1906 
1907  // Don't delete nodes that our semilocal elements need
1908  std::set<const Node *> connected_nodes;
1909  reconnect_nodes(elements_to_keep, connected_nodes);
1910 
1911  // Delete all the elements we have no reason to save,
1912  // starting with the most refined so that the mesh
1913  // is valid at all intermediate steps
1914  unsigned int n_levels = MeshTools::n_levels(mesh);
1915 
1916  for (int l = n_levels - 1; l >= 0; --l)
1917  for (auto & elem : as_range(mesh.level_elements_begin(l),
1919  {
1920  libmesh_assert (elem);
1921  // Make sure we don't leave any invalid pointers
1922  const bool keep_me = elements_to_keep.count(elem);
1923 
1924  if (!keep_me)
1925  elem->make_links_to_me_remote();
1926 
1927  // delete_elem doesn't currently invalidate element
1928  // iterators... that had better not change
1929  if (!keep_me)
1930  mesh.delete_elem(elem);
1931  }
1932 
1933  // Delete all the nodes we have no reason to save
1934  for (auto & node : mesh.node_ptr_range())
1935  {
1936  libmesh_assert(node);
1937  if (!connected_nodes.count(node))
1938  mesh.delete_node(node);
1939  }
1940 
1941  // If we had a point locator, it's invalid now that some of the
1942  // elements it pointed to have been deleted.
1944 
1945  // Much of our boundary info may have been for now-remote parts of
1946  // the mesh, in which case we don't want to keep local copies.
1948 
1949  // We now have all remote elements and nodes deleted; our ghosting
1950  // functors should be ready to delete any now-redundant cached data
1951  // they use too.
1953  gf->delete_remote_elements();
1954 
1955 #ifdef DEBUG
1957 #endif
1958 }

◆ find_global_indices()

template<typename ForwardIterator >
void libMesh::MeshCommunication::find_global_indices ( const Parallel::Communicator & communicator,
const libMesh::BoundingBox & bbox,
const ForwardIterator &  begin,
const ForwardIterator &  end,
std::vector< dof_id_type > &  index_map 
) const

This method determines a globally unique, partition-agnostic index for each object in the input range.
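
For example, a minimal sketch computing a global index for each active local element, similar to how the partitioners listed below use this method (the function example() and the choice of element range are illustrative):

#include "libmesh/bounding_box.h"
#include "libmesh/mesh_base.h"
#include "libmesh/mesh_communication.h"
#include "libmesh/mesh_tools.h"
#include <vector>

using namespace libMesh;

void example (MeshBase & mesh)
{
  // All processors must agree on the bounding box, since it determines
  // the Hilbert keys.
  const BoundingBox bbox = MeshTools::create_nodal_bounding_box (mesh);

  std::vector<dof_id_type> global_index;

  // Collective: involves a parallel sort of the Hilbert keys.
  MeshCommunication().find_global_indices
    (mesh.comm(), bbox,
     mesh.active_local_elements_begin(),
     mesh.active_local_elements_end(),
     global_index);

  // global_index[i] now holds the global rank of the i-th element
  // visited by the iterator range above.
}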

Definition at line 708 of file mesh_communication_global_indices.C.

References libMesh::Parallel::Sort< KeyType, IdxType >::bin(), end, libMesh::MeshTools::Generation::Private::idx(), libMesh::DofObject::invalid_processor_id, libMesh::libmesh_ignore(), std::max(), libMesh::Parallel::pull_parallel_vector_data(), libMesh::Real, and libMesh::Parallel::Sort< KeyType, IdxType >::sort().

Referenced by libMesh::ParmetisPartitioner::initialize(), libMesh::MetisPartitioner::partition_range(), and libMesh::Partitioner::partition_unpartitioned_elements().

713 {
714  LOG_SCOPE ("find_global_indices()", "MeshCommunication");
715 
716  // This method determines partition-agnostic global indices
717  // for nodes and elements.
718 
719  // Algorithm:
720  // (1) compute the Hilbert key for each local node/element
721  // (2) perform a parallel sort of the Hilbert key
722  // (3) get the min/max value on each processor
723  // (4) determine the position in the global ranking for
724  // each local object
725  index_map.clear();
726  std::size_t n_objects = std::distance (begin, end);
727  index_map.reserve(n_objects);
728 
729  //-------------------------------------------------------------
730  // (1) compute Hilbert keys
731  // These aren't trivial to compute, and we will need them again.
732  // But the binsort will sort the input vector, trashing the order
733  // that we'd like to rely on. So, two vectors...
734  std::vector<Parallel::DofObjectKey>
735  sorted_hilbert_keys,
736  hilbert_keys;
737  sorted_hilbert_keys.reserve(n_objects);
738  hilbert_keys.reserve(n_objects);
739  {
740  LOG_SCOPE("compute_hilbert_indices()", "MeshCommunication");
741  for (ForwardIterator it=begin; it!=end; ++it)
742  {
743  const Parallel::DofObjectKey hi(get_dofobject_key (*it, bbox));
744  hilbert_keys.push_back(hi);
745 
746  if ((*it)->processor_id() == communicator.rank())
747  sorted_hilbert_keys.push_back(hi);
748 
749  // someone needs to take care of unpartitioned objects!
750  if ((communicator.rank() == 0) &&
751  ((*it)->processor_id() == DofObject::invalid_processor_id))
752  sorted_hilbert_keys.push_back(hi);
753  }
754  }
755 
756  //-------------------------------------------------------------
757  // (2) parallel sort the Hilbert keys
758  START_LOG ("parallel_sort()", "MeshCommunication");
760  sorted_hilbert_keys);
761  sorter.sort();
762  STOP_LOG ("parallel_sort()", "MeshCommunication");
763  const std::vector<Parallel::DofObjectKey> & my_bin = sorter.bin();
764 
765  // The number of objects in my_bin on each processor
766  std::vector<unsigned int> bin_sizes(communicator.size());
767  communicator.allgather (static_cast<unsigned int>(my_bin.size()), bin_sizes);
768 
769  // The offset of my first global index
770  unsigned int my_offset = 0;
771  for (unsigned int pid=0; pid<communicator.rank(); pid++)
772  my_offset += bin_sizes[pid];
773 
774  //-------------------------------------------------------------
775  // (3) get the max value on each processor
776  std::vector<Parallel::DofObjectKey>
777  upper_bounds(1);
778 
779  if (!my_bin.empty())
780  upper_bounds[0] = my_bin.back();
781 
782  communicator.allgather (upper_bounds, /* identical_buffer_sizes = */ true);
783 
784  // Be careful here. The *_upper_bounds will be used to find the processor
785  // a given object belongs to. So, if a processor contains no objects (possible!)
786  // then copy the bound from the lower processor id.
787  for (unsigned int p=1; p<communicator.size(); p++)
788  if (!bin_sizes[p]) upper_bounds[p] = upper_bounds[p-1];
789 
790 
791  //-------------------------------------------------------------
792  // (4) determine the position in the global ranking for
793  // each local object
794  {
795  //----------------------------------------------
796  // all objects, not just local ones
797 
798  // Request sets to send to each processor
799  std::map<processor_id_type, std::vector<Parallel::DofObjectKey>>
800  requested_ids;
801  // Results to gather from each processor
802  std::map<processor_id_type, std::vector<dof_id_type>>
803  filled_request;
804 
805  // build up list of requests
806  std::vector<Parallel::DofObjectKey>::const_iterator hi =
807  hilbert_keys.begin();
808 
809  for (ForwardIterator it = begin; it != end; ++it)
810  {
811  libmesh_assert (hi != hilbert_keys.end());
812 
813  std::vector<Parallel::DofObjectKey>::iterator lb =
814  std::lower_bound(upper_bounds.begin(), upper_bounds.end(),
815  *hi);
816 
817  const processor_id_type pid =
818  cast_int<processor_id_type>
819  (std::distance (upper_bounds.begin(), lb));
820 
821  libmesh_assert_less (pid, communicator.size());
822 
823  requested_ids[pid].push_back(*hi);
824 
825  ++hi;
826  // go ahead and put pid in index_map, that way we
827  // don't have to repeat the std::lower_bound()
828  index_map.push_back(pid);
829  }
830 
831  auto gather_functor =
832  [
833 #ifndef NDEBUG
834  & upper_bounds,
835  & communicator,
836 #endif
837  & bbox,
838  & my_bin,
839  my_offset
840  ]
841  (processor_id_type, const std::vector<Parallel::DofObjectKey> & keys,
842  std::vector<dof_id_type> & global_ids)
843  {
844  // Ignore unused lambda capture warnings in devel mode
845  libmesh_ignore(bbox);
846 
847  // Fill the requests
848  const std::size_t keys_size = keys.size();
849  global_ids.clear();
850  global_ids.reserve(keys_size);
851  for (std::size_t idx=0; idx != keys_size; idx++)
852  {
853  const Parallel::DofObjectKey & hilbert_indices = keys[idx];
854  libmesh_assert_less_equal (hilbert_indices, upper_bounds[communicator.rank()]);
855 
856  // find the requested index in my node bin
857  std::vector<Parallel::DofObjectKey>::const_iterator pos =
858  std::lower_bound (my_bin.begin(), my_bin.end(), hilbert_indices);
859  libmesh_assert (pos != my_bin.end());
860 #ifdef DEBUG
861  // If we could not find the requested Hilbert index in
862  // my_bin, something went terribly wrong, possibly the
863  // Mesh was displaced differently on different processors,
864  // and therefore the Hilbert indices don't agree.
865  if (*pos != hilbert_indices)
866  {
867  // The input will be hilbert_indices. We convert it
868  // to BitVecType using the operator= provided by the
869  // BitVecType class. BitVecType is a CBigBitVec!
870  Hilbert::BitVecType input;
871 #ifdef LIBMESH_ENABLE_UNIQUE_ID
872  input = hilbert_indices.first;
873 #else
874  input = hilbert_indices;
875 #endif
876 
877  // Get output in a vector of CBigBitVec
878  std::vector<CBigBitVec> output(3);
879 
880  // Call the indexToCoords function
881  Hilbert::indexToCoords(output.data(), 8*sizeof(Hilbert::inttype), 3, input);
882 
883  // The entries in the output racks are integers in the
884  // range [0, Hilbert::inttype::max] which can be
885  // converted to floating point values in [0,1] and
886  // finally to actual values using the bounding box.
887  const Real max_int_as_real =
889 
890  // Get the points in [0,1]^3. The zeroth rack of each entry in
891  // 'output' maps to the normalized x, y, and z locations,
892  // respectively.
893  Point p_hat(static_cast<Real>(output[0].racks()[0]) / max_int_as_real,
894  static_cast<Real>(output[1].racks()[0]) / max_int_as_real,
895  static_cast<Real>(output[2].racks()[0]) / max_int_as_real);
896 
897  // Convert the points from [0,1]^3 to their actual (x,y,z) locations
898  Real
899  xmin = bbox.first(0),
900  xmax = bbox.second(0),
901  ymin = bbox.first(1),
902  ymax = bbox.second(1),
903  zmin = bbox.first(2),
904  zmax = bbox.second(2);
905 
906  // Convert the points from [0,1]^3 to their actual (x,y,z) locations
907  Point p(xmin + (xmax-xmin)*p_hat(0),
908  ymin + (ymax-ymin)*p_hat(1),
909  zmin + (zmax-zmin)*p_hat(2));
910 
911  libmesh_error_msg("Could not find hilbert indices: "
912  << hilbert_indices
913  << " corresponding to point " << p);
914  }
915 #endif
916 
917  // Finally, assign the global index based off the position of the index
918  // in my array, properly offset.
919  global_ids.push_back (cast_int<dof_id_type>(std::distance(my_bin.begin(), pos) + my_offset));
920  }
921  };
922 
923  auto action_functor =
924  [&filled_request]
925  (processor_id_type pid,
926  const std::vector<Parallel::DofObjectKey> &,
927  const std::vector<dof_id_type> & global_ids)
928  {
929  filled_request[pid] = global_ids;
930  };
931 
932  const dof_id_type * ex = nullptr;
934  (communicator, requested_ids, gather_functor, action_functor, ex);
935 
936  // We now have all the filled requests, so we can loop through our
937  // nodes once and assign the global index to each one.
938  {
939  std::vector<std::vector<dof_id_type>::const_iterator>
940  next_obj_on_proc; next_obj_on_proc.reserve(communicator.size());
941  for (processor_id_type pid=0; pid<communicator.size(); pid++)
942  next_obj_on_proc.push_back(filled_request[pid].begin());
943 
944  unsigned int cnt=0;
945  for (ForwardIterator it = begin; it != end; ++it, cnt++)
946  {
947  const processor_id_type pid = cast_int<processor_id_type>
948  (index_map[cnt]);
949 
950  libmesh_assert_less (pid, communicator.size());
951  libmesh_assert (next_obj_on_proc[pid] != filled_request[pid].end());
952 
953  const dof_id_type global_index = *next_obj_on_proc[pid];
954  index_map[cnt] = global_index;
955 
956  ++next_obj_on_proc[pid];
957  }
958  }
959  }
960 
961  libmesh_assert_equal_to(index_map.size(), n_objects);
962 }

◆ find_local_indices()

template<typename ForwardIterator >
void libMesh::MeshCommunication::find_local_indices ( const libMesh::BoundingBox & bbox,
const ForwardIterator &  begin,
const ForwardIterator &  end,
std::unordered_map< dof_id_type, dof_id_type > &  index_map 
) const

This method determines a locally unique, contiguous index for each object in the input range.
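
For example, a minimal sketch numbering the active local elements contiguously by Hilbert-key order (illustrative only; this mirrors the use in Partitioner::_find_global_index_by_pid_map() referenced below):

#include "libmesh/bounding_box.h"
#include "libmesh/mesh_base.h"
#include "libmesh/mesh_communication.h"
#include "libmesh/mesh_tools.h"
#include <unordered_map>

using namespace libMesh;

void example (MeshBase & mesh)
{
  const BoundingBox bbox = MeshTools::create_nodal_bounding_box (mesh);

  // Maps each element's current id to a contiguous local index,
  // ordered by the element's Hilbert key. No communication is involved.
  std::unordered_map<dof_id_type, dof_id_type> local_index;

  MeshCommunication().find_local_indices
    (bbox,
     mesh.active_local_elements_begin(),
     mesh.active_local_elements_end(),
     local_index);
}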

Definition at line 672 of file mesh_communication_global_indices.C.

References end.

Referenced by libMesh::Partitioner::_find_global_index_by_pid_map().

676 {
677  LOG_SCOPE ("find_local_indices()", "MeshCommunication");
678 
679  // This method determines id-agnostic local indices
680  // for nodes and elements by sorting Hilbert keys.
681 
682  index_map.clear();
683 
684  //-------------------------------------------------------------
685  // (1) compute Hilbert keys
686  // These aren't trivial to compute, and we will need them again.
687  // But the binsort will sort the input vector, trashing the order
688  // that we'd like to rely on. So, two vectors...
689  std::map<Parallel::DofObjectKey, dof_id_type> hilbert_keys;
690  {
691  LOG_SCOPE("local_hilbert_indices", "MeshCommunication");
692  for (ForwardIterator it=begin; it!=end; ++it)
693  {
694  const Parallel::DofObjectKey hi(get_dofobject_key ((*it), bbox));
695  hilbert_keys.emplace(hi, (*it)->id());
696  }
697  }
698 
699  {
700  dof_id_type cnt = 0;
701  for (auto key_val : hilbert_keys)
702  index_map[key_val.second] = cnt++;
703  }
704 }

◆ gather()

void libMesh::MeshCommunication::gather ( const processor_id_type  root_id,
DistributedMesh & mesh 
) const

This method takes an input DistributedMesh which may be distributed among all the processors. Each processor then sends its local nodes and elements to processor root_id. The end result is that a previously distributed DistributedMesh will be serialized on processor root_id. Since this method is collective it must be called by all processors. For the special case of root_id equal to DofObject::invalid_processor_id this function performs an allgather.
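
For example, a minimal sketch (illustrative only); DistributedMesh::gather_to_zero(), listed below, is the usual entry point and forwards here with root_id = 0:

#include "libmesh/distributed_mesh.h"

using namespace libMesh;

void example (DistributedMesh & mesh)
{
  // Collective: afterwards processor 0 stores every node and element,
  // e.g. so that a serial-only output format can write the whole mesh;
  // the other processors keep only their own portions.
  mesh.gather_to_zero();
}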

Definition at line 1156 of file mesh_communication.C.

Referenced by allgather(), and libMesh::DistributedMesh::gather_to_zero().

1157 {
1158  // no MPI == one processor, no need for this method...
1159  return;
1160 }

◆ gather_neighboring_elements()

void libMesh::MeshCommunication::gather_neighboring_elements ( DistributedMesh & mesh ) const

Definition at line 540 of file mesh_communication.C.

Referenced by libMesh::Nemesis_IO::read().

541 {
542  // no MPI == one processor, no need for this method...
543  return;
544 }

◆ make_elems_parallel_consistent()

void libMesh::MeshCommunication::make_elems_parallel_consistent ( MeshBase & mesh )

Copy ids of ghost elements from their local processors.

Definition at line 1505 of file mesh_communication.C.

References libMesh::MeshBase::active_elements_begin(), libMesh::MeshBase::active_elements_end(), libMesh::ParallelObject::comm(), mesh, libMesh::Parallel::sync_dofobject_data_by_id(), and libMesh::Parallel::sync_element_data_by_parent_id().

Referenced by libMesh::MeshRefinement::_refine_elements().

1506 {
1507  // This function must be run on all processors at once
1508  libmesh_parallel_only(mesh.comm());
1509 
1510  LOG_SCOPE ("make_elems_parallel_consistent()", "MeshCommunication");
1511 
1512  SyncIds syncids(mesh, &MeshBase::renumber_elem);
1515  mesh.active_elements_end(), syncids);
1516 
1517 #ifdef LIBMESH_ENABLE_UNIQUE_ID
1518  SyncUniqueIds<Elem> syncuniqueids(mesh, &MeshBase::query_elem_ptr);
1521  mesh.active_elements_end(), syncuniqueids);
1522 #endif
1523 }

◆ make_new_node_proc_ids_parallel_consistent()

void libMesh::MeshCommunication::make_new_node_proc_ids_parallel_consistent ( MeshBase & mesh )

Assuming all processor ids on nodes touching local elements are parallel consistent, this function makes processor ids on new nodes on other processors parallel consistent as well.

Definition at line 1685 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::MeshBase::element_ptr_range(), libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), libMesh::MeshTools::libmesh_assert_parallel_consistent_new_node_procids(), libMesh::MeshTools::libmesh_assert_parallel_consistent_procids< Node >(), mesh, std::min(), libMesh::Elem::node_ref(), libMesh::Elem::node_ref_range(), libMesh::MeshBase::not_local_elements_begin(), libMesh::MeshBase::not_local_elements_end(), libMesh::DofObject::processor_id(), libMesh::Parallel::sync_node_data_by_element_id(), and libMesh::Parallel::sync_node_data_by_element_id_once().

1686 {
1687  LOG_SCOPE ("make_new_node_proc_ids_parallel_consistent()", "MeshCommunication");
1688 
1689  // This function must be run on all processors at once
1690  libmesh_parallel_only(mesh.comm());
1691 
1692  // When this function is called, each section of a parallelized mesh
1693  // should be in the following state:
1694  //
1695  // Local nodes should have unique authoritative ids,
1696  // and new nodes should be unpartitioned.
1697  //
1698  // New ghost nodes touching local elements should be unpartitioned.
1699 
1700  // We may not have consistent processor ids for new nodes (because a
1701  // node may be old and partitioned on one processor but new and
1702  // unpartitioned on another) when we start
1703 #ifdef DEBUG
1705  // MeshTools::libmesh_assert_parallel_consistent_new_node_procids(mesh);
1706 #endif
1707 
1708  // We have two kinds of new nodes. *NEW* nodes are unpartitioned on
1709  // all processors: we need to use a id-independent (i.e. dumb)
1710  // heuristic to partition them. But "new" nodes are newly created
1711  // on some processors (when ghost elements are refined) yet
1712  // correspond to existing nodes on other processors: we need to use
1713  // the existing processor id for them.
1714  //
1715  // A node which is "new" on one processor will be associated with at
1716  // least one ghost element, and we can just query that ghost
1717  // element's owner to find out the correct processor id.
1718 
1719  auto node_unpartitioned =
1720  [](const Elem * elem, unsigned int local_node_num)
1721  { return elem->node_ref(local_node_num).processor_id() ==
1723 
1724  SyncProcIds sync(mesh);
1725 
1729  node_unpartitioned, sync);
1730 
1731  // Nodes should now be unpartitioned iff they are truly new; those
1732  // are the *only* nodes we will touch.
1733 #ifdef DEBUG
1735 #endif
1736 
1737  NodeWasNew node_was_new(mesh);
1738 
1739  // Set the lowest processor id we can on truly new nodes
1740  for (auto & elem : mesh.element_ptr_range())
1741  for (auto & node : elem->node_ref_range())
1742  if (node_was_new.was_new.count(&node))
1743  {
1744  processor_id_type & pid = node.processor_id();
1745  pid = std::min(pid, elem->processor_id());
1746  }
1747 
1748  // Then finally see if other processors have a lower option
1751  ElemNodesMaybeNew(), node_was_new, sync);
1752 
1753  // We should have consistent processor ids when we're done.
1754 #ifdef DEBUG
1757 #endif
1758 }

◆ make_new_nodes_parallel_consistent()

void libMesh::MeshCommunication::make_new_nodes_parallel_consistent ( MeshBase & mesh )

Copy processor_ids and ids on new nodes from their local processors.

Definition at line 1805 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::MeshTools::correct_node_proc_ids(), and mesh.

Referenced by libMesh::MeshRefinement::_refine_elements().

1806 {
1807  // This function must be run on all processors at once
1808  libmesh_parallel_only(mesh.comm());
1809 
1810  // When this function is called, each section of a parallelized mesh
1811  // should be in the following state:
1812  //
1813  // All nodes should have the exact same physical location on every
1814  // processor where they exist.
1815  //
1816  // Local nodes should have unique authoritative ids,
1817  // and new nodes should be unpartitioned.
1818  //
1819  // New ghost nodes touching local elements should be unpartitioned.
1820  //
1821  // New ghost nodes should have ids which are either already correct
1822  // or which are in the "unpartitioned" id space.
1823  //
1824  // Non-new nodes should have correct ids and processor ids already.
1825 
1826  // First, let's sync up new nodes' processor ids.
1827 
1829 
1830  // Second, sync up dofobject ids.
1832 
1833  // Third, sync up dofobject unique_ids if applicable.
1835 
1836  // Finally, correct the processor ids to make DofMap happy
1838 }

◆ make_node_ids_parallel_consistent()

void libMesh::MeshCommunication::make_node_ids_parallel_consistent ( MeshBase & mesh )

Assuming all ids on local nodes are globally unique, and assuming all processor ids are parallel consistent, this function makes all other ids parallel consistent.

Definition at line 1452 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), libMesh::MeshTools::libmesh_assert_topology_consistent_procids< Node >(), mesh, and libMesh::Parallel::sync_node_data_by_element_id().

1453 {
1454  // This function must be run on all processors at once
1455  libmesh_parallel_only(mesh.comm());
1456 
1457  // We need to agree on which processor owns every node, but we can't
1458  // easily assert that here because we don't currently agree on which
1459  // id every node has, and some of our temporary ids on unrelated
1460  // nodes will "overlap".
1461 //#ifdef DEBUG
1462 // MeshTools::libmesh_assert_parallel_consistent_procids<Node> (mesh);
1463 //#endif // DEBUG
1464 
1465  LOG_SCOPE ("make_node_ids_parallel_consistent()", "MeshCommunication");
1466 
1467  SyncNodeIds syncids(mesh);
1471 
1472  // At this point, with both ids and processor ids synced, we can
1473  // finally check for topological consistency of node processor ids.
1474 #ifdef DEBUG
1476 #endif
1477 }

◆ make_node_proc_ids_parallel_consistent()

void libMesh::MeshCommunication::make_node_proc_ids_parallel_consistent ( MeshBase & mesh )

Assuming all processor ids on nodes touching local elements are parallel consistent, this function makes all other processor ids parallel consistent as well.

Definition at line 1656 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), mesh, and libMesh::Parallel::sync_node_data_by_element_id().

1657 {
1658  LOG_SCOPE ("make_node_proc_ids_parallel_consistent()", "MeshCommunication");
1659 
1660  // This function must be run on all processors at once
1661  libmesh_parallel_only(mesh.comm());
1662 
1663  // When this function is called, each section of a parallelized mesh
1664  // should be in the following state:
1665  //
1666  // All nodes should have the exact same physical location on every
1667  // processor where they exist.
1668  //
1669  // Local nodes should have unique authoritative ids,
1670  // and processor ids consistent with all processors which own
1671  // an element touching them.
1672  //
1673  // Ghost nodes touching local elements should have processor ids
1674  // consistent with all processors which own an element touching
1675  // them.
1676  SyncProcIds sync(mesh);
1680 }

◆ make_node_unique_ids_parallel_consistent()

void libMesh::MeshCommunication::make_node_unique_ids_parallel_consistent ( MeshBase & mesh )

Assuming all unique_ids on local nodes are globally unique, and assuming all processor ids are parallel consistent, this function makes all ghost unique_ids parallel consistent.

Definition at line 1481 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::libmesh_ignore(), mesh, libMesh::MeshBase::nodes_begin(), libMesh::MeshBase::nodes_end(), and libMesh::Parallel::sync_dofobject_data_by_id().

Referenced by libMesh::BoundaryInfo::add_elements(), and libMesh::Nemesis_IO::read().

1482 {
1483  // Avoid unused variable warnings if unique ids aren't enabled.
1485 
1486  // This function must be run on all processors at once
1487  libmesh_parallel_only(mesh.comm());
1488 
1489 #ifdef LIBMESH_ENABLE_UNIQUE_ID
1490  LOG_SCOPE ("make_node_unique_ids_parallel_consistent()", "MeshCommunication");
1491 
1492  SyncUniqueIds<Node> syncuniqueids(mesh, &MeshBase::query_node_ptr);
1494  mesh.nodes_begin(),
1495  mesh.nodes_end(),
1496  syncuniqueids);
1497 
1498 #endif
1499 }

◆ make_nodes_parallel_consistent()

void libMesh::MeshCommunication::make_nodes_parallel_consistent ( MeshBase & mesh )

Copy processor_ids and ids on ghost nodes from their local processors. This is useful for code which wants to add nodes to a distributed mesh.
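
For example, a minimal sketch (illustrative only) of the pattern described above, where code modifies nodes of a distributed mesh and then restores consistency on the ghost copies:

#include "libmesh/mesh_base.h"
#include "libmesh/mesh_communication.h"

using namespace libMesh;

void example (MeshBase & mesh)
{
  // ... add nodes (with identical physical locations on every processor
  // that sees them) or renumber local nodes authoritatively ...

  // Collective: ghost nodes receive the ids and processor ids held by
  // the processors that own them.
  MeshCommunication().make_nodes_parallel_consistent (mesh);
}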

Definition at line 1763 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::MeshTools::correct_node_proc_ids(), and mesh.

Referenced by libMesh::MeshRefinement::_coarsen_elements(), libMesh::UnstructuredMesh::all_second_order(), and libMesh::MeshTools::Modification::all_tri().

1764 {
1765  // This function must be run on all processors at once
1766  libmesh_parallel_only(mesh.comm());
1767 
1768  // When this function is called, each section of a parallelized mesh
1769  // should be in the following state:
1770  //
1771  // All nodes should have the exact same physical location on every
1772  // processor where they exist.
1773  //
1774  // Local nodes should have unique authoritative ids,
1775  // and processor ids consistent with all processors which own
1776  // an element touching them.
1777  //
1778  // Ghost nodes touching local elements should have processor ids
1779  // consistent with all processors which own an element touching
1780  // them.
1781  //
1782  // Ghost nodes should have ids which are either already correct
1783  // or which are in the "unpartitioned" id space.
1784 
1785  // First, let's sync up processor ids. Some of these processor ids
1786  // may be "wrong" from coarsening, but they're right in the sense
1787  // that they'll tell us who has the authoritative dofobject ids for
1788  // each node.
1789 
1791 
1792  // Second, sync up dofobject ids.
1794 
1795  // Third, sync up dofobject unique_ids if applicable.
1797 
1798  // Finally, correct the processor ids to make DofMap happy
1800 }

◆ make_p_levels_parallel_consistent()

void libMesh::MeshCommunication::make_p_levels_parallel_consistent ( MeshBase & mesh )

Copy p levels of ghost elements from their local processors.

Definition at line 1529 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), mesh, and libMesh::Parallel::sync_dofobject_data_by_id().

Referenced by libMesh::MeshRefinement::_coarsen_elements(), and libMesh::MeshRefinement::_refine_elements().

1530 {
1531  // This function must be run on all processors at once
1532  libmesh_parallel_only(mesh.comm());
1533 
1534  LOG_SCOPE ("make_p_levels_parallel_consistent()", "MeshCommunication");
1535 
1536  SyncPLevels syncplevels(mesh);
1539  syncplevels);
1540 }

◆ redistribute()

void libMesh::MeshCommunication::redistribute ( DistributedMesh & mesh,
bool  newly_coarsened_only = false 
) const

This method takes a parallel distributed mesh and redistributes the elements. Specifically, any elements stored on a given processor are sent to the processor which "owns" them. Similarly, any elements assigned to the current processor but stored on another are received. Once this step is completed, any required ghost elements are updated. The final result is that each processor stores only the elements it actually owns and any ghost elements required to satisfy data dependencies. This method can be invoked after a partitioning step to effect the new partitioning.

Redistribution can also be done with newly coarsened elements' neighbors only.
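
A minimal sketch (illustrative only; it assumes the usual libMesh flow in which a Partitioner assigns new processor ids and a DistributedMesh then redistributes itself via the DistributedMesh::redistribute() override, which forwards to the method documented here):

#include "libmesh/distributed_mesh.h"

using namespace libMesh;

void example (DistributedMesh & mesh)
{
  // Collective: assign new processor ids to the elements; for a
  // DistributedMesh the elements are then shipped to their new owners
  // and the required ghost elements are refreshed.
  mesh.partition (mesh.n_processors());
}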

Definition at line 284 of file mesh_communication.C.

Referenced by libMesh::DistributedMesh::redistribute().

285 {
286  // no MPI == one processor, no redistribution
287  return;
288 }

◆ send_coarse_ghosts()

void libMesh::MeshCommunication::send_coarse_ghosts ( MeshBase & mesh ) const

Examine a just-coarsened mesh, and for any newly-coarsened elements, send the associated ghosted elements to the processor which needs them.

Definition at line 915 of file mesh_communication.C.

Referenced by libMesh::MeshRefinement::_coarsen_elements().

916 {
917  // no MPI == one processor, no need for this method...
918  return;
919 }

The documentation for this class was generated from the following files:

mesh_communication.h
mesh_communication.C
mesh_communication_global_indices.C