libMesh::MeshCommunication Class Reference

#include <mesh_communication.h>

Public Member Functions

 MeshCommunication ()
 
 ~MeshCommunication ()
 
void clear ()
 
void broadcast (MeshBase &) const
 
void redistribute (DistributedMesh &mesh, bool newly_coarsened_only=false) const
 
void gather_neighboring_elements (DistributedMesh &) const
 
void send_coarse_ghosts (MeshBase &) const
 
void gather (const processor_id_type root_id, DistributedMesh &) const
 
void allgather (DistributedMesh &mesh) const
 
void delete_remote_elements (DistributedMesh &, const std::set< Elem * > &) const
 
void assign_global_indices (MeshBase &) const
 
void check_for_duplicate_global_indices (MeshBase &) const
 
template<typename ForwardIterator >
void find_global_indices (const Parallel::Communicator &communicator, const libMesh::BoundingBox &, const ForwardIterator &, const ForwardIterator &, std::vector< dof_id_type > &) const
 
void make_elems_parallel_consistent (MeshBase &)
 
void make_p_levels_parallel_consistent (MeshBase &)
 
void make_node_ids_parallel_consistent (MeshBase &)
 
void make_node_unique_ids_parallel_consistent (MeshBase &)
 
void make_node_proc_ids_parallel_consistent (MeshBase &)
 
void make_new_node_proc_ids_parallel_consistent (MeshBase &)
 
void make_nodes_parallel_consistent (MeshBase &)
 
void make_new_nodes_parallel_consistent (MeshBase &)
 

Detailed Description

This is the MeshCommunication class. It handles all the details of communicating mesh information from one processor to another. All parallelization of the Mesh data structures is done via this class.

Author
Benjamin S. Kirk
Date
2003

Definition at line 47 of file mesh_communication.h.
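
For orientation (not part of the generated reference), a minimal usage sketch is shown below. It assumes a standard LibMeshInit setup and a ReplicatedMesh that processor 0 has filled in serially; libMesh's own mesh readers follow essentially this pattern before handing the mesh to every processor.

  #include "libmesh/libmesh.h"
  #include "libmesh/mesh_communication.h"
  #include "libmesh/replicated_mesh.h"

  int main (int argc, char ** argv)
  {
    libMesh::LibMeshInit init (argc, argv);

    libMesh::ReplicatedMesh mesh (init.comm());

    if (mesh.processor_id() == 0)
      {
        // ... fill "mesh" serially here (reading a file, building elements, ...) ...
      }

    // Collective call: every other processor receives a copy of the
    // processor-0 mesh, including its boundary information.
    libMesh::MeshCommunication().broadcast (mesh);

    return 0;
  }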

Constructor & Destructor Documentation

libMesh::MeshCommunication::MeshCommunication ( )
inline

Constructor.

Definition at line 54 of file mesh_communication.h.

54 {}
libMesh::MeshCommunication::~MeshCommunication ( )
inline

Destructor.

Definition at line 59 of file mesh_communication.h.

References broadcast(), clear(), gather(), gather_neighboring_elements(), mesh, redistribute(), and send_coarse_ghosts().

59 {}

Member Function Documentation

void libMesh::MeshCommunication::allgather ( DistributedMesh & mesh) const
inline

This method takes an input DistributedMesh which may be distributed among all the processors. Each processor then sends its local nodes and elements to the other processors. The end result is that a previously distributed DistributedMesh will be serialized on each processor. Since this method is collective it must be called by all processors.

Definition at line 132 of file mesh_communication.h.

References assign_global_indices(), check_for_duplicate_global_indices(), libMesh::connect_children(), libMesh::connect_families(), delete_remote_elements(), find_global_indices(), gather(), libMesh::DofObject::invalid_processor_id, make_elems_parallel_consistent(), make_new_node_proc_ids_parallel_consistent(), make_new_nodes_parallel_consistent(), make_node_ids_parallel_consistent(), make_node_proc_ids_parallel_consistent(), make_node_unique_ids_parallel_consistent(), make_nodes_parallel_consistent(), make_p_levels_parallel_consistent(), mesh, libMesh::query_ghosting_functors(), and libMesh::reconnect_nodes().

Referenced by libMesh::DistributedMesh::allgather().
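
For illustration only, a hedged sketch of the calling pattern; it assumes dist_mesh is a prepared libMesh::DistributedMesh (in the library itself, DistributedMesh::allgather() makes this call):

  // Collective: afterwards every processor holds the complete mesh.
  libMesh::MeshCommunication mc;
  mc.allgather (dist_mesh);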

void libMesh::MeshCommunication::assign_global_indices ( MeshBase & mesh) const

This method assigns globally unique, partition-agnostic indices to the nodes and elements in the mesh. The approach is to compute the Hilbert space-filling curve key and use its value to assign an index in [0,N_global). Since the Hilbert key is unique for each spatial location, two objects occupying the same location will be assigned the same global id. Thus, this method can also be useful for identifying duplicate nodes which may occur during parallel refinement.

Definition at line 169 of file mesh_communication_global_indices.C.

References libMesh::Parallel::Sort< KeyType, IdxType >::bin(), libMesh::ParallelObject::comm(), libMesh::MeshTools::create_nodal_bounding_box(), libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), end, libMesh::MeshTools::Generation::Private::idx(), libMesh::libmesh_assert(), libMesh::MeshBase::local_elements_begin(), libMesh::MeshBase::local_elements_end(), libMesh::MeshBase::local_nodes_begin(), libMesh::MeshBase::local_nodes_end(), libMesh::MeshBase::n_elem(), libMesh::MeshBase::n_nodes(), libMesh::MeshBase::nodes_begin(), libMesh::MeshBase::nodes_end(), libMesh::Threads::parallel_for(), libMesh::DofObject::set_id(), and libMesh::Parallel::Sort< KeyType, IdxType >::sort().

Referenced by allgather(), and libMesh::MeshTools::Private::globally_renumber_nodes_and_elements().

170 {
171  LOG_SCOPE ("assign_global_indices()", "MeshCommunication");
172 
173  // This method determines partition-agnostic global indices
174  // for nodes and elements.
175 
176  // Algorithm:
177  // (1) compute the Hilbert key for each local node/element
178  // (2) perform a parallel sort of the Hilbert key
179  // (3) get the min/max value on each processor
180  // (4) determine the position in the global ranking for
181  // each local object
182 
183  const Parallel::Communicator & communicator (mesh.comm());
184 
185  // Global bounding box. We choose the nodal bounding box for
186  // backwards compatibility; the element bounding box may be looser
187  // on curved elements.
188  BoundingBox bbox =
189  MeshTools::create_nodal_bounding_box (mesh);
190 
191  //-------------------------------------------------------------
192  // (1) compute Hilbert keys
193  std::vector<Parallel::DofObjectKey>
194  node_keys, elem_keys;
195 
196  {
197  // Nodes first
198  {
199  ConstNodeRange nr (mesh.local_nodes_begin(),
200  mesh.local_nodes_end());
201  node_keys.resize (nr.size());
202  Threads::parallel_for (nr, ComputeHilbertKeys (bbox, node_keys));
203 
204  // // It's O(N^2) to check that these keys don't duplicate before the
205  // // sort...
206  // MeshBase::const_node_iterator nodei = mesh.local_nodes_begin();
207  // for (std::size_t i = 0; i != node_keys.size(); ++i, ++nodei)
208  // {
209  // MeshBase::const_node_iterator nodej = mesh.local_nodes_begin();
210  // for (std::size_t j = 0; j != i; ++j, ++nodej)
211  // {
212  // if (node_keys[i] == node_keys[j])
213  // {
214  // CFixBitVec icoords[3], jcoords[3];
215  // get_hilbert_coords(**nodej, bbox, jcoords);
216  // libMesh::err <<
217  // "node " << (*nodej)->id() << ", " <<
218  // *(Point *)(*nodej) << " has HilbertIndices " <<
219  // node_keys[j] << std::endl;
220  // get_hilbert_coords(**nodei, bbox, icoords);
221  // libMesh::err <<
222  // "node " << (*nodei)->id() << ", " <<
223  // *(Point *)(*nodei) << " has HilbertIndices " <<
224  // node_keys[i] << std::endl;
225  // libmesh_error_msg("Error: nodes with duplicate Hilbert keys!");
226  // }
227  // }
228  // }
229  }
230 
231  // Elements next
232  {
233  ConstElemRange er (mesh.local_elements_begin(),
234  mesh.local_elements_end());
235  elem_keys.resize (er.size());
236  Threads::parallel_for (er, ComputeHilbertKeys (bbox, elem_keys));
237 
238  // // For elements, the keys can be (and in the case of TRI, are
239  // // expected to be) duplicates, but only if the elements are at
240  // // different levels
241  // MeshBase::const_element_iterator elemi = mesh.local_elements_begin();
242  // for (std::size_t i = 0; i != elem_keys.size(); ++i, ++elemi)
243  // {
244  // MeshBase::const_element_iterator elemj = mesh.local_elements_begin();
245  // for (std::size_t j = 0; j != i; ++j, ++elemj)
246  // {
247  // if ((elem_keys[i] == elem_keys[j]) &&
248  // ((*elemi)->level() == (*elemj)->level()))
249  // {
250  // libMesh::err <<
251  // "level " << (*elemj)->level() << " elem\n" <<
252  // (**elemj) << " centroid " <<
253  // (*elemj)->centroid() << " has HilbertIndices " <<
254  // elem_keys[j] << " or " <<
255  // get_hilbert_index((*elemj), bbox) <<
256  // std::endl;
257  // libMesh::err <<
258  // "level " << (*elemi)->level() << " elem\n" <<
259  // (**elemi) << " centroid " <<
260  // (*elemi)->centroid() << " has HilbertIndices " <<
261  // elem_keys[i] << " or " <<
262  // get_hilbert_index((*elemi), bbox) <<
263  // std::endl;
264  // libmesh_error_msg("Error: level " << (*elemi)->level() << " elements with duplicate Hilbert keys!");
265  // }
266  // }
267  // }
268  }
269  } // done computing Hilbert keys
270 
271 
272 
273  //-------------------------------------------------------------
274  // (2) parallel sort the Hilbert keys
275  Parallel::Sort<Parallel::DofObjectKey> node_sorter (communicator,
276  node_keys);
277  node_sorter.sort(); /* done with node_keys */ //node_keys.clear();
278 
279  const std::vector<Parallel::DofObjectKey> & my_node_bin =
280  node_sorter.bin();
281 
282  Parallel::Sort<Parallel::DofObjectKey> elem_sorter (communicator,
283  elem_keys);
284  elem_sorter.sort(); /* done with elem_keys */ //elem_keys.clear();
285 
286  const std::vector<Parallel::DofObjectKey> & my_elem_bin =
287  elem_sorter.bin();
288 
289 
290 
291  //-------------------------------------------------------------
292  // (3) get the max value on each processor
293  std::vector<Parallel::DofObjectKey>
294  node_upper_bounds(communicator.size()),
295  elem_upper_bounds(communicator.size());
296 
297  { // limit scope of temporaries
298  std::vector<Parallel::DofObjectKey> recvbuf(2*communicator.size());
299  std::vector<unsigned short int> /* do not use a vector of bools here since it is not always so! */
300  empty_nodes (communicator.size()),
301  empty_elem (communicator.size());
302  std::vector<Parallel::DofObjectKey> my_max(2);
303 
304  communicator.allgather (static_cast<unsigned short int>(my_node_bin.empty()), empty_nodes);
305  communicator.allgather (static_cast<unsigned short int>(my_elem_bin.empty()), empty_elem);
306 
307  if (!my_node_bin.empty()) my_max[0] = my_node_bin.back();
308  if (!my_elem_bin.empty()) my_max[1] = my_elem_bin.back();
309 
310  communicator.allgather (my_max, /* identical_buffer_sizes = */ true);
311 
312  // Be careful here. The *_upper_bounds will be used to find the processor
313  // a given object belongs to. So, if a processor contains no objects (possible!)
314  // then copy the bound from the lower processor id.
315  for (processor_id_type p=0; p<communicator.size(); p++)
316  {
317  node_upper_bounds[p] = my_max[2*p+0];
318  elem_upper_bounds[p] = my_max[2*p+1];
319 
320  if (p > 0) // default hilbert index value is the OK upper bound for processor 0.
321  {
322  if (empty_nodes[p]) node_upper_bounds[p] = node_upper_bounds[p-1];
323  if (empty_elem[p]) elem_upper_bounds[p] = elem_upper_bounds[p-1];
324  }
325  }
326  }
327 
328 
329 
330  //-------------------------------------------------------------
331  // (4) determine the position in the global ranking for
332  // each local object
333  {
334  //----------------------------------------------
335  // Nodes first -- all nodes, not just local ones
336  {
337  // Request sets to send to each processor
338  std::vector<std::vector<Parallel::DofObjectKey> >
339  requested_ids (communicator.size());
340  // Results to gather from each processor
341  std::vector<std::vector<dof_id_type> >
342  filled_request (communicator.size());
343 
344  {
347 
348  // build up list of requests
349  for (; it != end; ++it)
350  {
351  const Node * node = (*it);
352  libmesh_assert(node);
353  const Parallel::DofObjectKey hi =
354  get_hilbert_index (node, bbox);
355  const processor_id_type pid =
356  cast_int<processor_id_type>
357  (std::distance (node_upper_bounds.begin(),
358  std::lower_bound(node_upper_bounds.begin(),
359  node_upper_bounds.end(),
360  hi)));
361 
362  libmesh_assert_less (pid, communicator.size());
363 
364  requested_ids[pid].push_back(hi);
365  }
366  }
367 
368  // The number of objects in my_node_bin on each processor
369  std::vector<dof_id_type> node_bin_sizes(communicator.size());
370  communicator.allgather (static_cast<dof_id_type>(my_node_bin.size()), node_bin_sizes);
371 
372  // The offset of my first global index
373  dof_id_type my_offset = 0;
374  for (processor_id_type pid=0; pid<communicator.rank(); pid++)
375  my_offset += node_bin_sizes[pid];
376 
377  // start with pid=0, so that we will trade with ourself
378  for (processor_id_type pid=0; pid<communicator.size(); pid++)
379  {
380  // Trade my requests with processor procup and procdown
381  const processor_id_type procup = cast_int<processor_id_type>
382  ((communicator.rank() + pid) % communicator.size());
383  const processor_id_type procdown = cast_int<processor_id_type>
384  ((communicator.size() + communicator.rank() - pid) %
385  communicator.size());
386 
387  std::vector<Parallel::DofObjectKey> request_to_fill;
388  communicator.send_receive(procup, requested_ids[procup],
389  procdown, request_to_fill);
390 
391  // Fill the requests
392  std::vector<dof_id_type> global_ids; global_ids.reserve(request_to_fill.size());
393  for (std::size_t idx=0; idx<request_to_fill.size(); idx++)
394  {
395  const Parallel::DofObjectKey & hi = request_to_fill[idx];
396  libmesh_assert_less_equal (hi, node_upper_bounds[communicator.rank()]);
397 
398  // find the requested index in my node bin
399  std::vector<Parallel::DofObjectKey>::const_iterator pos =
400  std::lower_bound (my_node_bin.begin(), my_node_bin.end(), hi);
401  libmesh_assert (pos != my_node_bin.end());
402  libmesh_assert_equal_to (*pos, hi);
403 
404  // Finally, assign the global index based off the position of the index
405  // in my array, properly offset.
406  global_ids.push_back(cast_int<dof_id_type>(std::distance(my_node_bin.begin(), pos) + my_offset));
407  }
408 
409  // and trade back
410  communicator.send_receive (procdown, global_ids,
411  procup, filled_request[procup]);
412  }
413 
414  // We now have all the filled requests, so we can loop through our
415  // nodes once and assign the global index to each one.
416  {
417  std::vector<std::vector<dof_id_type>::const_iterator>
418  next_obj_on_proc; next_obj_on_proc.reserve(communicator.size());
419  for (processor_id_type pid=0; pid<communicator.size(); pid++)
420  next_obj_on_proc.push_back(filled_request[pid].begin());
421 
422  {
423  MeshBase::node_iterator it = mesh.nodes_begin();
424  const MeshBase::node_iterator end = mesh.nodes_end();
425 
426  for (; it != end; ++it)
427  {
428  Node * node = (*it);
429  libmesh_assert(node);
430  const Parallel::DofObjectKey hi =
431  get_hilbert_index (node, bbox);
432  const processor_id_type pid =
433  cast_int<processor_id_type>
434  (std::distance (node_upper_bounds.begin(),
435  std::lower_bound(node_upper_bounds.begin(),
436  node_upper_bounds.end(),
437  hi)));
438 
439  libmesh_assert_less (pid, communicator.size());
440  libmesh_assert (next_obj_on_proc[pid] != filled_request[pid].end());
441 
442  const dof_id_type global_index = *next_obj_on_proc[pid];
443  libmesh_assert_less (global_index, mesh.n_nodes());
444  node->set_id() = global_index;
445 
446  ++next_obj_on_proc[pid];
447  }
448  }
449  }
450  }
451 
452  //---------------------------------------------------
453  // elements next -- all elements, not just local ones
454  {
455  // Request sets to send to each processor
456  std::vector<std::vector<Parallel::DofObjectKey> >
457  requested_ids (communicator.size());
458  // Results to gather from each processor
459  std::vector<std::vector<dof_id_type> >
460  filled_request (communicator.size());
461 
462  {
465 
466  for (; it != end; ++it)
467  {
468  const Elem * elem = (*it);
469  libmesh_assert(elem);
470  const Parallel::DofObjectKey hi =
471  get_hilbert_index (elem, bbox);
472  const processor_id_type pid =
473  cast_int<processor_id_type>
474  (std::distance (elem_upper_bounds.begin(),
475  std::lower_bound(elem_upper_bounds.begin(),
476  elem_upper_bounds.end(),
477  hi)));
478 
479  libmesh_assert_less (pid, communicator.size());
480 
481  requested_ids[pid].push_back(hi);
482  }
483  }
484 
485  // The number of objects in my_elem_bin on each processor
486  std::vector<dof_id_type> elem_bin_sizes(communicator.size());
487  communicator.allgather (static_cast<dof_id_type>(my_elem_bin.size()), elem_bin_sizes);
488 
489  // The offset of my first global index
490  dof_id_type my_offset = 0;
491  for (processor_id_type pid=0; pid<communicator.rank(); pid++)
492  my_offset += elem_bin_sizes[pid];
493 
494  // start with pid=0, so that we will trade with ourself
495  for (processor_id_type pid=0; pid<communicator.size(); pid++)
496  {
497  // Trade my requests with processor procup and procdown
498  const processor_id_type procup = cast_int<processor_id_type>
499  ((communicator.rank() + pid) % communicator.size());
500  const processor_id_type procdown = cast_int<processor_id_type>
501  ((communicator.size() + communicator.rank() - pid) %
502  communicator.size());
503 
504  std::vector<Parallel::DofObjectKey> request_to_fill;
505  communicator.send_receive(procup, requested_ids[procup],
506  procdown, request_to_fill);
507 
508  // Fill the requests
509  std::vector<dof_id_type> global_ids; global_ids.reserve(request_to_fill.size());
510  for (std::size_t idx=0; idx<request_to_fill.size(); idx++)
511  {
512  const Parallel::DofObjectKey & hi = request_to_fill[idx];
513  libmesh_assert_less_equal (hi, elem_upper_bounds[communicator.rank()]);
514 
515  // find the requested index in my elem bin
516  std::vector<Parallel::DofObjectKey>::const_iterator pos =
517  std::lower_bound (my_elem_bin.begin(), my_elem_bin.end(), hi);
518  libmesh_assert (pos != my_elem_bin.end());
519  libmesh_assert_equal_to (*pos, hi);
520 
521  // Finally, assign the global index based off the position of the index
522  // in my array, properly offset.
523  global_ids.push_back (cast_int<dof_id_type>(std::distance(my_elem_bin.begin(), pos) + my_offset));
524  }
525 
526  // and trade back
527  communicator.send_receive (procdown, global_ids,
528  procup, filled_request[procup]);
529  }
530 
531  // We now have all the filled requests, so we can loop through our
532  // elements once and assign the global index to each one.
533  {
534  std::vector<std::vector<dof_id_type>::const_iterator>
535  next_obj_on_proc; next_obj_on_proc.reserve(communicator.size());
536  for (processor_id_type pid=0; pid<communicator.size(); pid++)
537  next_obj_on_proc.push_back(filled_request[pid].begin());
538 
539  {
540  MeshBase::element_iterator it = mesh.elements_begin();
541  const MeshBase::element_iterator end = mesh.elements_end();
542 
543  for (; it != end; ++it)
544  {
545  Elem * elem = (*it);
546  libmesh_assert(elem);
547  const Parallel::DofObjectKey hi =
548  get_hilbert_index (elem, bbox);
549  const processor_id_type pid =
550  cast_int<processor_id_type>
551  (std::distance (elem_upper_bounds.begin(),
552  std::lower_bound(elem_upper_bounds.begin(),
553  elem_upper_bounds.end(),
554  hi)));
555 
556  libmesh_assert_less (pid, communicator.size());
557  libmesh_assert (next_obj_on_proc[pid] != filled_request[pid].end());
558 
559  const dof_id_type global_index = *next_obj_on_proc[pid];
560  libmesh_assert_less (global_index, mesh.n_elem());
561  elem->set_id() = global_index;
562 
563  ++next_obj_on_proc[pid];
564  }
565  }
566  }
567  }
568  }
569 }
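
The essential step in the listing above is the ranking in steps (3) and (4): once the Hilbert keys have been parallel-sorted into per-processor bins, an object's global index is simply its position within the bin that holds its key, offset by the total size of the bins on lower-ranked processors. The stand-alone sketch below illustrates that ranking with plain integers in place of Hilbert keys (illustration only, not libMesh code):

  #include <algorithm>
  #include <cstddef>
  #include <iostream>
  #include <vector>

  int main ()
  {
    // Stand-ins for the sorted Hilbert keys binned onto this processor
    // and for the offset contributed by lower-ranked processors' bins.
    const std::vector<unsigned long> my_bin = {10, 42, 97, 150};
    const std::size_t my_offset = 1000; // sum of lower processors' bin sizes

    // Given a requested key, its global index is offset + position in the bin.
    const unsigned long requested_key = 97;
    const auto pos = std::lower_bound (my_bin.begin(), my_bin.end(), requested_key);
    const std::size_t global_index = my_offset + std::distance (my_bin.begin(), pos);

    std::cout << "key " << requested_key << " -> global index " << global_index << '\n';
    return 0;
  }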
void libMesh::MeshCommunication::broadcast ( MeshBase & mesh) const

This method takes a mesh (which is assumed to reside on processor 0) and broadcasts it to all the other processors. It also broadcasts any boundary information the mesh has associated with it.

Definition at line 1100 of file mesh_communication.C.

References libMesh::Parallel::broadcast(), libMesh::Parallel::Communicator::broadcast(), libMesh::Parallel::Communicator::broadcast_packed_range(), libMesh::MeshBase::cache_elem_dims(), libMesh::MeshBase::clear(), libMesh::MeshBase::clear_point_locator(), libMesh::ParallelObject::comm(), libMesh::MeshBase::get_boundary_info(), libMesh::MeshBase::level_elements_begin(), libMesh::MeshBase::level_elements_end(), libMesh::libmesh_assert(), mesh, libMesh::MeshBase::n_elem(), libMesh::MeshTools::n_levels(), libMesh::MeshBase::n_nodes(), libMesh::ParallelObject::n_processors(), libMesh::MeshBase::nodes_begin(), libMesh::MeshBase::nodes_end(), libMesh::MeshTools::paranoid_n_levels(), libMesh::ParallelObject::processor_id(), libMesh::BoundaryInfo::set_nodeset_name_map(), libMesh::BoundaryInfo::set_sideset_name_map(), libMesh::MeshBase::set_subdomain_name_map(), and libMesh::Parallel::Communicator::verify().

Referenced by libMesh::NameBasedIO::read(), libMesh::CheckpointIO::read(), and ~MeshCommunication().

1101 {
1102  // no MPI == one processor, no need for this method...
1103  return;
1104 }
void libMesh::MeshCommunication::check_for_duplicate_global_indices ( MeshBase & mesh) const

Throw an error if we have any index clashes in the numbering used by assign_global_indices.

Definition at line 578 of file mesh_communication_global_indices.C.

References libMesh::MeshTools::create_nodal_bounding_box(), libMesh::err, libMesh::MeshBase::local_elements_begin(), libMesh::MeshBase::local_elements_end(), libMesh::MeshBase::local_nodes_begin(), libMesh::MeshBase::local_nodes_end(), and libMesh::Threads::parallel_for().

Referenced by allgather().

579 {
580  LOG_SCOPE ("check_for_duplicate_global_indices()", "MeshCommunication");
581 
582  // Global bounding box. We choose the nodal bounding box for
583  // backwards compatibility; the element bounding box may be looser
584  // on curved elements.
585  BoundingBox bbox =
586  MeshTools::create_nodal_bounding_box (mesh);
587 
588 
589  std::vector<Parallel::DofObjectKey>
590  node_keys, elem_keys;
591 
592  {
593  // Nodes first
594  {
595  ConstNodeRange nr (mesh.local_nodes_begin(),
596  mesh.local_nodes_end());
597  node_keys.resize (nr.size());
598  Threads::parallel_for (nr, ComputeHilbertKeys (bbox, node_keys));
599 
600  // It's O(N^2) to check that these keys don't duplicate before the
601  // sort...
602  MeshBase::const_node_iterator nodei = mesh.local_nodes_begin();
603  for (std::size_t i = 0; i != node_keys.size(); ++i, ++nodei)
604  {
605  MeshBase::const_node_iterator nodej = mesh.local_nodes_begin();
606  for (std::size_t j = 0; j != i; ++j, ++nodej)
607  {
608  if (node_keys[i] == node_keys[j])
609  {
610  CFixBitVec icoords[3], jcoords[3];
611  get_hilbert_coords(**nodej, bbox, jcoords);
612  libMesh::err <<
613  "node " << (*nodej)->id() << ", " <<
614  *(Point *)(*nodej) << " has HilbertIndices " <<
615  node_keys[j] << std::endl;
616  get_hilbert_coords(**nodei, bbox, icoords);
617  libMesh::err <<
618  "node " << (*nodei)->id() << ", " <<
619  *(Point *)(*nodei) << " has HilbertIndices " <<
620  node_keys[i] << std::endl;
621  libmesh_error_msg("Error: nodes with duplicate Hilbert keys!");
622  }
623  }
624  }
625  }
626 
627  // Elements next
628  {
629  ConstElemRange er (mesh.local_elements_begin(),
630  mesh.local_elements_end());
631  elem_keys.resize (er.size());
632  Threads::parallel_for (er, ComputeHilbertKeys (bbox, elem_keys));
633 
634  // For elements, the keys can be (and in the case of TRI, are
635  // expected to be) duplicates, but only if the elements are at
636  // different levels
637  MeshBase::const_element_iterator elemi = mesh.local_elements_begin();
638  for (std::size_t i = 0; i != elem_keys.size(); ++i, ++elemi)
639  {
640  MeshBase::const_element_iterator elemj = mesh.local_elements_begin();
641  for (std::size_t j = 0; j != i; ++j, ++elemj)
642  {
643  if ((elem_keys[i] == elem_keys[j]) &&
644  ((*elemi)->level() == (*elemj)->level()))
645  {
646  libMesh::err <<
647  "level " << (*elemj)->level() << " elem\n" <<
648  (**elemj) << " centroid " <<
649  (*elemj)->centroid() << " has HilbertIndices " <<
650  elem_keys[j] << " or " <<
651  get_hilbert_index((*elemj), bbox) <<
652  std::endl;
653  libMesh::err <<
654  "level " << (*elemi)->level() << " elem\n" <<
655  (**elemi) << " centroid " <<
656  (*elemi)->centroid() << " has HilbertIndices " <<
657  elem_keys[i] << " or " <<
658  get_hilbert_index((*elemi), bbox) <<
659  std::endl;
660  libmesh_error_msg("Error: level " << (*elemi)->level() << " elements with duplicate Hilbert keys!");
661  }
662  }
663  }
664  }
665  } // done checking Hilbert keys
666 }
void libMesh::MeshCommunication::clear ( )

Clears all data structures and resets to a pristine state.

Definition at line 316 of file mesh_communication.C.

Referenced by ~MeshCommunication().

317 {
318  // _neighboring_processors.clear();
319 }
void libMesh::MeshCommunication::delete_remote_elements ( DistributedMesh & mesh,
const std::set< Elem * > &  extra_ghost_elem_ids 
) const

This method takes an input DistributedMesh which may be distributed among all the processors. Each processor deletes all elements which are neither local elements nor "ghost" elements which touch local elements, and deletes all nodes which are not contained in local or ghost elements. The end result is that a previously serial DistributedMesh will be distributed between processors. Since this method is collective it must be called by all processors.

The std::set is a list of extra elements that you don't want to delete. These will be left on the current processor along with local elements and ghosted neighbors.

Definition at line 1794 of file mesh_communication.C.

References libMesh::Elem::active_family_tree(), libMesh::MeshBase::clear_point_locator(), libMesh::ParallelObject::comm(), libMesh::connect_children(), libMesh::connect_families(), libMesh::DistributedMesh::delete_elem(), libMesh::DistributedMesh::delete_node(), libMesh::GhostingFunctor::delete_remote_elements(), libMesh::MeshBase::get_boundary_info(), libMesh::MeshBase::ghosting_functors_begin(), libMesh::MeshBase::ghosting_functors_end(), libMesh::DistributedMesh::is_serial(), libMesh::DistributedMesh::level_elements_begin(), libMesh::DistributedMesh::level_elements_end(), libMesh::libmesh_assert(), libMesh::MeshTools::libmesh_assert_valid_refinement_tree(), libMesh::Elem::make_links_to_me_remote(), libMesh::DistributedMesh::max_elem_id(), libMesh::DistributedMesh::max_node_id(), libMesh::MeshTools::n_levels(), libMesh::DistributedMesh::nodes_begin(), libMesh::DistributedMesh::nodes_end(), libMesh::DistributedMesh::parallel_max_elem_id(), libMesh::DistributedMesh::parallel_max_node_id(), libMesh::ParallelObject::processor_id(), libMesh::query_ghosting_functors(), libMesh::reconnect_nodes(), libMesh::BoundaryInfo::regenerate_id_sets(), libMesh::Elem::subactive(), and libMesh::Parallel::Communicator::verify().

Referenced by allgather(), and libMesh::DistributedMesh::delete_remote_elements().

1796 {
1797  // The mesh should know it's about to be parallelized
1798  libmesh_assert (!mesh.is_serial());
1799 
1800  LOG_SCOPE("delete_remote_elements()", "MeshCommunication");
1801 
1802 #ifdef DEBUG
1803  // We expect maximum ids to be in sync so we can use them to size
1804  // vectors
1805  libmesh_assert(mesh.comm().verify(mesh.max_node_id()));
1806  libmesh_assert(mesh.comm().verify(mesh.max_elem_id()));
1807  const dof_id_type par_max_node_id = mesh.parallel_max_node_id();
1808  const dof_id_type par_max_elem_id = mesh.parallel_max_elem_id();
1809  libmesh_assert_equal_to (par_max_node_id, mesh.max_node_id());
1810  libmesh_assert_equal_to (par_max_elem_id, mesh.max_elem_id());
1811 #endif
1812 
1813  std::set<const Elem *, CompareElemIdsByLevel> elements_to_keep;
1814 
1815  // Don't delete elements that we were explicitly told not to
1816  for (std::set<Elem *>::iterator it = extra_ghost_elem_ids.begin();
1817  it != extra_ghost_elem_ids.end(); ++it)
1818  {
1819  const Elem * elem = *it;
1820 
1821  std::vector<const Elem *> active_family;
1822 #ifdef LIBMESH_ENABLE_AMR
1823  if (!elem->subactive())
1824  elem->active_family_tree(active_family);
1825  else
1826 #endif
1827  active_family.push_back(elem);
1828 
1829  for (std::size_t i=0; i != active_family.size(); ++i)
1830  elements_to_keep.insert(active_family[i]);
1831  }
1832 
1833  // See which elements we still need to keep ghosted, given that
1834  // we're keeping local and unpartitioned elements.
1835  query_ghosting_functors(mesh, mesh.processor_id(),
1836  false, elements_to_keep);
1837  query_ghosting_functors(mesh, DofObject::invalid_processor_id,
1838  false, elements_to_keep);
1839 
1840  // The inactive elements we need to send should have their
1841  // immediate children present.
1842  connect_children(mesh, mesh.processor_id(), elements_to_keep);
1843  connect_children(mesh, DofObject::invalid_processor_id, elements_to_keep);
1844 
1845  // The elements we need should have their ancestors and their
1846  // subactive children present too.
1847  connect_families(elements_to_keep);
1848 
1849  // Don't delete nodes that our semilocal elements need
1850  std::set<const Node *> connected_nodes;
1851  reconnect_nodes(elements_to_keep, connected_nodes);
1852 
1853  // Delete all the elements we have no reason to save,
1854  // starting with the most refined so that the mesh
1855  // is valid at all intermediate steps
1856  unsigned int n_levels = MeshTools::n_levels(mesh);
1857 
1858  for (int l = n_levels - 1; l >= 0; --l)
1859  {
1860  MeshBase::element_iterator lev_elem_it = mesh.level_elements_begin(l),
1861  lev_end = mesh.level_elements_end(l);
1862  for (; lev_elem_it != lev_end; ++lev_elem_it)
1863  {
1864  Elem * elem = *lev_elem_it;
1865  libmesh_assert (elem);
1866  // Make sure we don't leave any invalid pointers
1867  const bool keep_me = elements_to_keep.count(elem);
1868 
1869  if (!keep_me)
1870  elem->make_links_to_me_remote();
1871 
1872  // delete_elem doesn't currently invalidate element
1873  // iterators... that had better not change
1874  if (!keep_me)
1875  mesh.delete_elem(elem);
1876  }
1877  }
1878 
1879  // Delete all the nodes we have no reason to save
1880  MeshBase::node_iterator node_it = mesh.nodes_begin(),
1881  node_end = mesh.nodes_end();
1882  for (node_it = mesh.nodes_begin(); node_it != node_end; ++node_it)
1883  {
1884  Node * node = *node_it;
1885  libmesh_assert(node);
1886  if (!connected_nodes.count(node))
1887  mesh.delete_node(node);
1888  }
1889 
1890  // If we had a point locator, it's invalid now that some of the
1891  // elements it pointed to have been deleted.
1892  mesh.clear_point_locator();
1893 
1894  // Much of our boundary info may have been for now-remote parts of
1895  // the mesh, in which case we don't want to keep local copies.
1896  mesh.get_boundary_info().regenerate_id_sets();
1897 
1898  // We now have all remote elements and nodes deleted; our ghosting
1899  // functors should be ready to delete any now-redundant cached data
1900  // they use too.
1901  std::set<GhostingFunctor *>::iterator gf_it = mesh.ghosting_functors_begin();
1902  const std::set<GhostingFunctor *>::iterator gf_end = mesh.ghosting_functors_end();
1903  for (; gf_it != gf_end; ++gf_it)
1904  {
1905  GhostingFunctor *gf = *gf_it;
1906  gf->delete_remote_elements();
1907  }
1908 
1909 #ifdef DEBUG
1910  MeshTools::libmesh_assert_valid_refinement_tree(mesh);
1911 #endif
1912 }
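
A hedged usage sketch follows; it assumes dist_mesh is a prepared libMesh::DistributedMesh (in the library, DistributedMesh::delete_remote_elements() wraps this call), and the extra_ghost_elems set is simply left empty here:

  // Elements placed in this set survive the deletion even when remote;
  // an empty set keeps only local elements and their required ghosts.
  std::set<libMesh::Elem *> extra_ghost_elems;

  // Collective: afterwards the mesh is distributed across processors.
  libMesh::MeshCommunication().delete_remote_elements (dist_mesh, extra_ghost_elems);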
template<typename ForwardIterator >
void libMesh::MeshCommunication::find_global_indices ( const Parallel::Communicator &  communicator,
const libMesh::BoundingBox &  bbox,
const ForwardIterator &  begin,
const ForwardIterator &  end,
std::vector< dof_id_type > &  index_map 
) const

This method determines a globally unique, partition-agnostic index for each object in the input range.

Definition at line 675 of file mesh_communication_global_indices.C.

References libMesh::Parallel::Communicator::allgather(), libMesh::Parallel::Sort< KeyType, IdxType >::bin(), end, libMesh::MeshTools::Generation::Private::idx(), libMesh::DofObject::invalid_processor_id, libMesh::libmesh_assert(), std::max(), libMesh::Parallel::Communicator::rank(), libMesh::Real, libMesh::Parallel::Communicator::send_receive(), libMesh::Parallel::Communicator::size(), and libMesh::Parallel::Sort< KeyType, IdxType >::sort().

Referenced by allgather(), libMesh::ParmetisPartitioner::initialize(), libMesh::MetisPartitioner::partition_range(), and libMesh::Partitioner::partition_unpartitioned_elements().

680 {
681  LOG_SCOPE ("find_global_indices()", "MeshCommunication");
682 
683  // This method determines partition-agnostic global indices
684  // for nodes and elements.
685 
686  // Algorithm:
687  // (1) compute the Hilbert key for each local node/element
688  // (2) perform a parallel sort of the Hilbert key
689  // (3) get the min/max value on each processor
690  // (4) determine the position in the global ranking for
691  // each local object
692  index_map.clear();
693  std::size_t n_objects = std::distance (begin, end);
694  index_map.reserve(n_objects);
695 
696  //-------------------------------------------------------------
697  // (1) compute Hilbert keys
698  // These aren't trivial to compute, and we will need them again.
699  // But the binsort will sort the input vector, trashing the order
700  // that we'd like to rely on. So, two vectors...
701  std::vector<Parallel::DofObjectKey>
702  sorted_hilbert_keys,
703  hilbert_keys;
704  sorted_hilbert_keys.reserve(n_objects);
705  hilbert_keys.reserve(n_objects);
706  {
707  LOG_SCOPE("compute_hilbert_indices()", "MeshCommunication");
708  for (ForwardIterator it=begin; it!=end; ++it)
709  {
710  const Parallel::DofObjectKey hi(get_hilbert_index (*it, bbox));
711  hilbert_keys.push_back(hi);
712 
713  if ((*it)->processor_id() == communicator.rank())
714  sorted_hilbert_keys.push_back(hi);
715 
716  // someone needs to take care of unpartitioned objects!
717  if ((communicator.rank() == 0) &&
718  ((*it)->processor_id() == DofObject::invalid_processor_id))
719  sorted_hilbert_keys.push_back(hi);
720  }
721  }
722 
723  //-------------------------------------------------------------
724  // (2) parallel sort the Hilbert keys
725  START_LOG ("parallel_sort()", "MeshCommunication");
726  Parallel::Sort<Parallel::DofObjectKey> sorter (communicator,
727  sorted_hilbert_keys);
728  sorter.sort();
729  STOP_LOG ("parallel_sort()", "MeshCommunication");
730  const std::vector<Parallel::DofObjectKey> & my_bin = sorter.bin();
731 
732  // The number of objects in my_bin on each processor
733  std::vector<unsigned int> bin_sizes(communicator.size());
734  communicator.allgather (static_cast<unsigned int>(my_bin.size()), bin_sizes);
735 
736  // The offset of my first global index
737  unsigned int my_offset = 0;
738  for (unsigned int pid=0; pid<communicator.rank(); pid++)
739  my_offset += bin_sizes[pid];
740 
741  //-------------------------------------------------------------
742  // (3) get the max value on each processor
743  std::vector<Parallel::DofObjectKey>
744  upper_bounds(1);
745 
746  if (!my_bin.empty())
747  upper_bounds[0] = my_bin.back();
748 
749  communicator.allgather (upper_bounds, /* identical_buffer_sizes = */ true);
750 
751  // Be careful here. The *_upper_bounds will be used to find the processor
752  // a given object belongs to. So, if a processor contains no objects (possible!)
753  // then copy the bound from the lower processor id.
754  for (unsigned int p=1; p<communicator.size(); p++)
755  if (!bin_sizes[p]) upper_bounds[p] = upper_bounds[p-1];
756 
757 
758  //-------------------------------------------------------------
759  // (4) determine the position in the global ranking for
760  // each local object
761  {
762  //----------------------------------------------
763  // all objects, not just local ones
764 
765  // Request sets to send to each processor
766  std::vector<std::vector<Parallel::DofObjectKey> >
767  requested_ids (communicator.size());
768  // Results to gather from each processor
769  std::vector<std::vector<dof_id_type> >
770  filled_request (communicator.size());
771 
772  // build up list of requests
773  std::vector<Parallel::DofObjectKey>::const_iterator hi =
774  hilbert_keys.begin();
775 
776  for (ForwardIterator it = begin; it != end; ++it)
777  {
778  libmesh_assert (hi != hilbert_keys.end());
779 
780  std::vector<Parallel::DofObjectKey>::iterator lb =
781  std::lower_bound(upper_bounds.begin(), upper_bounds.end(),
782  *hi);
783 
784  const processor_id_type pid =
785  cast_int<processor_id_type>
786  (std::distance (upper_bounds.begin(), lb));
787 
788  libmesh_assert_less (pid, communicator.size());
789 
790  requested_ids[pid].push_back(*hi);
791 
792  ++hi;
793  // go ahead and put pid in index_map, that way we
794  // don't have to repeat the std::lower_bound()
795  index_map.push_back(pid);
796  }
797 
798  // start with pid=0, so that we will trade with ourself
799  std::vector<Parallel::DofObjectKey> request_to_fill;
800  std::vector<dof_id_type> global_ids;
801  for (processor_id_type pid=0; pid<communicator.size(); pid++)
802  {
803  // Trade my requests with processor procup and procdown
804  const processor_id_type procup = cast_int<processor_id_type>
805  ((communicator.rank() + pid) % communicator.size());
806  const processor_id_type procdown = cast_int<processor_id_type>
807  ((communicator.size() + communicator.rank() - pid) %
808  communicator.size());
809 
810  communicator.send_receive(procup, requested_ids[procup],
811  procdown, request_to_fill);
812 
813  // Fill the requests
814  global_ids.clear(); global_ids.reserve(request_to_fill.size());
815  for (std::size_t idx=0; idx<request_to_fill.size(); idx++)
816  {
817  const Parallel::DofObjectKey & hilbert_indices = request_to_fill[idx];
818  libmesh_assert_less_equal (hilbert_indices, upper_bounds[communicator.rank()]);
819 
820  // find the requested index in my node bin
821  std::vector<Parallel::DofObjectKey>::const_iterator pos =
822  std::lower_bound (my_bin.begin(), my_bin.end(), hilbert_indices);
823  libmesh_assert (pos != my_bin.end());
824 #ifdef DEBUG
825  // If we could not find the requested Hilbert index in
826  // my_bin, something went terribly wrong, possibly the
827  // Mesh was displaced differently on different processors,
828  // and therefore the Hilbert indices don't agree.
829  if (*pos != hilbert_indices)
830  {
831  // The input will be hilbert_indices. We convert it
832  // to BitVecType using the operator= provided by the
833  // BitVecType class. BitVecType is a CBigBitVec!
834  Hilbert::BitVecType input;
835 #ifdef LIBMESH_ENABLE_UNIQUE_ID
836  input = hilbert_indices.first;
837 #else
838  input = hilbert_indices;
839 #endif
840 
841  // Get output in a vector of CBigBitVec
842  std::vector<CBigBitVec> output(3);
843 
844  // Call the indexToCoords function
845  Hilbert::indexToCoords(&output[0], 8*sizeof(Hilbert::inttype), 3, input);
846 
847  // The entries in the output racks are integers in the
848  // range [0, Hilbert::inttype::max] which can be
849  // converted to floating point values in [0,1] and
850  // finally to actual values using the bounding box.
851  Real max_int_as_real = static_cast<Real>(std::numeric_limits<Hilbert::inttype>::max());
852 
853  // Get the points in [0,1]^3. The zeroth rack of each entry in
854  // 'output' maps to the normalized x, y, and z locations,
855  // respectively.
856  Point p_hat(static_cast<Real>(output[0].racks()[0]) / max_int_as_real,
857  static_cast<Real>(output[1].racks()[0]) / max_int_as_real,
858  static_cast<Real>(output[2].racks()[0]) / max_int_as_real);
859 
860  // Convert the points from [0,1]^3 to their actual (x,y,z) locations
861  Real
862  xmin = bbox.first(0),
863  xmax = bbox.second(0),
864  ymin = bbox.first(1),
865  ymax = bbox.second(1),
866  zmin = bbox.first(2),
867  zmax = bbox.second(2);
868 
869  // Convert the points from [0,1]^3 to their actual (x,y,z) locations
870  Point p(xmin + (xmax-xmin)*p_hat(0),
871  ymin + (ymax-ymin)*p_hat(1),
872  zmin + (zmax-zmin)*p_hat(2));
873 
874  libmesh_error_msg("Could not find hilbert indices: "
875  << hilbert_indices
876  << " corresponding to point " << p);
877  }
878 #endif
879 
880  // Finally, assign the global index based off the position of the index
881  // in my array, properly offset.
882  global_ids.push_back (cast_int<dof_id_type>(std::distance(my_bin.begin(), pos) + my_offset));
883  }
884 
885  // and trade back
886  communicator.send_receive (procdown, global_ids,
887  procup, filled_request[procup]);
888  }
889 
890  // We now have all the filled requests, so we can loop through our
891  // nodes once and assign the global index to each one.
892  {
893  std::vector<std::vector<dof_id_type>::const_iterator>
894  next_obj_on_proc; next_obj_on_proc.reserve(communicator.size());
895  for (unsigned int pid=0; pid<communicator.size(); pid++)
896  next_obj_on_proc.push_back(filled_request[pid].begin());
897 
898  unsigned int cnt=0;
899  for (ForwardIterator it = begin; it != end; ++it, cnt++)
900  {
901  const processor_id_type pid = cast_int<processor_id_type>
902  (index_map[cnt]);
903 
904  libmesh_assert_less (pid, communicator.size());
905  libmesh_assert (next_obj_on_proc[pid] != filled_request[pid].end());
906 
907  const dof_id_type global_index = *next_obj_on_proc[pid];
908  index_map[cnt] = global_index;
909 
910  ++next_obj_on_proc[pid];
911  }
912  }
913  }
914 
915  libmesh_assert_equal_to(index_map.size(), n_objects);
916 }
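
For illustration, a hedged sketch of calling this method over a mesh's node range; it assumes mesh is a prepared MeshBase reference and builds the bounding box the same way assign_global_indices() does above:

  libMesh::BoundingBox bbox =
    libMesh::MeshTools::create_nodal_bounding_box (mesh);

  // Filled with one partition-agnostic global index per node in the range.
  std::vector<libMesh::dof_id_type> node_indices;

  libMesh::MeshCommunication().find_global_indices
    (mesh.comm(), bbox, mesh.nodes_begin(), mesh.nodes_end(), node_indices);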
void libMesh::MeshCommunication::gather ( const processor_id_type  root_id,
DistributedMesh &  mesh 
) const

This method takes an input DistributedMesh which may be distributed among all the processors. Each processor then sends its local nodes and elements to processor root_id. The end result is that a previously distributed DistributedMesh will be serialized on processor root_id. Since this method is collective it must be called by all processors. For the special case of root_id equal to DofObject::invalid_processor_id this function performs an allgather.

Definition at line 1172 of file mesh_communication.C.

References libMesh::Parallel::Communicator::allgather_packed_range(), libMesh::MeshBase::clear_point_locator(), libMesh::ParallelObject::comm(), libMesh::UnstructuredMesh::find_neighbors(), libMesh::Parallel::gather(), libMesh::Parallel::Communicator::gather_packed_range(), libMesh::DistributedMesh::level_elements_begin(), libMesh::DistributedMesh::level_elements_end(), libMesh::libmesh_assert(), libMesh::MeshTools::libmesh_assert_valid_boundary_ids(), mesh, libMesh::DistributedMesh::n_elem(), libMesh::MeshTools::n_levels(), libMesh::DistributedMesh::n_nodes(), libMesh::ParallelObject::n_processors(), libMesh::DistributedMesh::nodes_begin(), libMesh::DistributedMesh::nodes_end(), libMesh::ParallelObject::processor_id(), renumber, and libMesh::Parallel::Communicator::verify().

Referenced by allgather(), libMesh::DistributedMesh::gather_to_zero(), and ~MeshCommunication().

1173 {
1174  // no MPI == one processor, no need for this method...
1175  return;
1176 }
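
A hedged sketch of the calling pattern; it assumes dist_mesh is a prepared libMesh::DistributedMesh (DistributedMesh::gather_to_zero() uses this call inside the library):

  const libMesh::processor_id_type root_id = 0;

  // Collective: afterwards processor 0 holds the complete mesh.
  libMesh::MeshCommunication().gather (root_id, dist_mesh);

  // Passing DofObject::invalid_processor_id as root_id instead would
  // perform an allgather, as noted in the description above.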
void libMesh::MeshCommunication::gather_neighboring_elements ( DistributedMesh & mesh) const

To gather the neighboring elements, this method first finds all the processors that may contain elements that neighbor my elements. This list is guaranteed to include all processors that border any of my elements, but may include additional ones as well; it is computed from bounding boxes for the elements on each processor, which are checked for overlaps.

void libMesh::MeshCommunication::make_elems_parallel_consistent ( MeshBase & mesh)

Copy ids of ghost elements from their local processors.

Definition at line 1497 of file mesh_communication.C.

References libMesh::MeshBase::active_elements_begin(), libMesh::MeshBase::active_elements_end(), libMesh::ParallelObject::comm(), libMesh::Parallel::sync_dofobject_data_by_id(), and libMesh::Parallel::sync_element_data_by_parent_id().

Referenced by libMesh::MeshRefinement::_refine_elements(), and allgather().

1498 {
1499  // This function must be run on all processors at once
1500  libmesh_parallel_only(mesh.comm());
1501 
1502  LOG_SCOPE ("make_elems_parallel_consistent()", "MeshCommunication");
1503 
1504  SyncIds syncids(mesh, &MeshBase::renumber_elem);
1505  Parallel::sync_element_data_by_parent_id
1506  (mesh, mesh.active_elements_begin(),
1507  mesh.active_elements_end(), syncids);
1508 
1509 #ifdef LIBMESH_ENABLE_UNIQUE_ID
1510  SyncUniqueIds<Elem> syncuniqueids(mesh, &MeshBase::query_elem_ptr);
1511  Parallel::sync_dofobject_data_by_id
1512  (mesh.comm(), mesh.active_elements_begin(),
1513  mesh.active_elements_end(), syncuniqueids);
1514 #endif
1515 }
void libMesh::MeshCommunication::make_new_node_proc_ids_parallel_consistent ( MeshBase & mesh)

Assuming all processor ids on nodes touching local elements are parallel consistent, this function makes processor ids on new nodes on other processors parallel consistent as well.

Definition at line 1683 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), and libMesh::Parallel::sync_node_data_by_element_id().

Referenced by allgather().

1684 {
1685  LOG_SCOPE ("make_new_node_proc_ids_parallel_consistent()", "MeshCommunication");
1686 
1687  // This function must be run on all processors at once
1688  libmesh_parallel_only(mesh.comm());
1689 
1690  // When this function is called, each section of a parallelized mesh
1691  // should be in the following state:
1692  //
1693  // Local nodes should have unique authoritative ids,
1694  // and processor ids consistent with all processors which own
1695  // an element touching them.
1696  //
1697  // Ghost nodes touching local elements should have processor ids
1698  // consistent with all processors which own an element touching
1699  // them.
1700  SyncProcIds sync(mesh);
1701  Parallel::sync_node_data_by_element_id
1702  (mesh, mesh.elements_begin(), mesh.elements_end(),
1703  ElemNodesMaybeNew(), NodeMaybeNew(), sync);
1704 }
void libMesh::MeshCommunication::make_new_nodes_parallel_consistent ( MeshBase & mesh)

Copy processor_ids and ids on new nodes from their local processors.

Definition at line 1751 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), and libMesh::MeshTools::correct_node_proc_ids().

Referenced by libMesh::MeshRefinement::_refine_elements(), and allgather().

1752 {
1753  // This function must be run on all processors at once
1754  libmesh_parallel_only(mesh.comm());
1755 
1756  // When this function is called, each section of a parallelized mesh
1757  // should be in the following state:
1758  //
1759  // All nodes should have the exact same physical location on every
1760  // processor where they exist.
1761  //
1762  // Local nodes should have unique authoritative ids,
1763  // and processor ids consistent with all processors which own
1764  // an element touching them.
1765  //
1766  // Ghost nodes touching local elements should have processor ids
1767  // consistent with all processors which own an element touching
1768  // them.
1769  //
1770  // Ghost nodes should have ids which are either already correct
1771  // or which are in the "unpartitioned" id space.
1772 
1773  // First, let's sync up processor ids. Some of these processor ids
1774  // may be "wrong" from coarsening, but they're right in the sense
1775  // that they'll tell us who has the authoritative dofobject ids for
1776  // each node.
1777 
1778  make_new_node_proc_ids_parallel_consistent(mesh);
1779 
1780  // Second, sync up dofobject ids.
1781  make_node_ids_parallel_consistent(mesh);
1782 
1783  // Third, sync up dofobject unique_ids if applicable.
1784  make_node_unique_ids_parallel_consistent(mesh);
1785 
1786  // Finally, correct the processor ids to make DofMap happy
1787  MeshTools::correct_node_proc_ids(mesh);
1788 }
void libMesh::MeshCommunication::make_node_ids_parallel_consistent ( MeshBase & mesh)

Assuming all ids on local nodes are globally unique, and assuming all processor ids are parallel consistent, this function makes all other ids parallel consistent.

Definition at line 1458 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), and libMesh::Parallel::sync_node_data_by_element_id().

Referenced by allgather().

1459 {
1460  // This function must be run on all processors at once
1461  libmesh_parallel_only(mesh.comm());
1462 
1463  LOG_SCOPE ("make_node_ids_parallel_consistent()", "MeshCommunication");
1464 
1465  SyncNodeIds syncids(mesh);
1466  Parallel::sync_node_data_by_element_id
1467  (mesh, mesh.elements_begin(), mesh.elements_end(),
1469 }
void libMesh::MeshCommunication::make_node_proc_ids_parallel_consistent ( MeshBase & mesh)

Assuming all processor ids on nodes touching local elements are parallel consistent, this function makes all other processor ids parallel consistent as well.

Definition at line 1654 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), and libMesh::Parallel::sync_node_data_by_element_id().

Referenced by allgather().

1655 {
1656  LOG_SCOPE ("make_node_proc_ids_parallel_consistent()", "MeshCommunication");
1657 
1658  // This function must be run on all processors at once
1659  libmesh_parallel_only(mesh.comm());
1660 
1661  // When this function is called, each section of a parallelized mesh
1662  // should be in the following state:
1663  //
1664  // All nodes should have the exact same physical location on every
1665  // processor where they exist.
1666  //
1667  // Local nodes should have unique authoritative ids,
1668  // and processor ids consistent with all processors which own
1669  // an element touching them.
1670  //
1671  // Ghost nodes touching local elements should have processor ids
1672  // consistent with all processors which own an element touching
1673  // them.
1674  SyncProcIds sync(mesh);
1675  Parallel::sync_node_data_by_element_id
1676  (mesh, mesh.elements_begin(), mesh.elements_end(),
1678 }
void libMesh::MeshCommunication::make_node_unique_ids_parallel_consistent ( MeshBase & mesh)

Assuming all unique_ids on local nodes are globally unique, and assuming all processor ids are parallel consistent, this function makes all ghost unique_ids parallel consistent.

Definition at line 1473 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::libmesh_ignore(), libMesh::MeshBase::nodes_begin(), libMesh::MeshBase::nodes_end(), and libMesh::Parallel::sync_dofobject_data_by_id().

Referenced by libMesh::BoundaryInfo::add_elements(), allgather(), and libMesh::Nemesis_IO::read().

1474 {
1475  // Avoid unused variable warnings if unique ids aren't enabled.
1476  libmesh_ignore(mesh);
1477 
1478  // This function must be run on all processors at once
1479  libmesh_parallel_only(mesh.comm());
1480 
1481 #ifdef LIBMESH_ENABLE_UNIQUE_ID
1482  LOG_SCOPE ("make_node_unique_ids_parallel_consistent()", "MeshCommunication");
1483 
1484  SyncUniqueIds<Node> syncuniqueids(mesh, &MeshBase::query_node_ptr);
1485  Parallel::sync_dofobject_data_by_id(mesh.comm(),
1486  mesh.nodes_begin(),
1487  mesh.nodes_end(),
1488  syncuniqueids);
1489 
1490 #endif
1491 }
void libMesh::MeshCommunication::make_nodes_parallel_consistent ( MeshBase & mesh)

Copy processor_ids and ids on ghost nodes from their local processors. This is useful for code which wants to add nodes to a distributed mesh.

Definition at line 1709 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), and libMesh::MeshTools::correct_node_proc_ids().

Referenced by libMesh::MeshRefinement::_coarsen_elements(), libMesh::UnstructuredMesh::all_second_order(), libMesh::MeshTools::Modification::all_tri(), and allgather().

1710 {
1711  // This function must be run on all processors at once
1712  libmesh_parallel_only(mesh.comm());
1713 
1714  // When this function is called, each section of a parallelized mesh
1715  // should be in the following state:
1716  //
1717  // All nodes should have the exact same physical location on every
1718  // processor where they exist.
1719  //
1720  // Local nodes should have unique authoritative ids,
1721  // and processor ids consistent with all processors which own
1722  // an element touching them.
1723  //
1724  // Ghost nodes touching local elements should have processor ids
1725  // consistent with all processors which own an element touching
1726  // them.
1727  //
1728  // Ghost nodes should have ids which are either already correct
1729  // or which are in the "unpartitioned" id space.
1730 
1731  // First, let's sync up processor ids. Some of these processor ids
1732  // may be "wrong" from coarsening, but they're right in the sense
1733  // that they'll tell us who has the authoritative dofobject ids for
1734  // each node.
1735 
1736  make_node_proc_ids_parallel_consistent(mesh);
1737 
1738  // Second, sync up dofobject ids.
1739  make_node_ids_parallel_consistent(mesh);
1740 
1741  // Third, sync up dofobject unique_ids if applicable.
1742  make_node_unique_ids_parallel_consistent(mesh);
1743 
1744  // Finally, correct the processor ids to make DofMap happy
1745  MeshTools::correct_node_proc_ids(mesh);
1746 }
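
For illustration, a hedged sketch of the intended calling pattern; it assumes every processor has just finished adding nodes and elements to a distributed MeshBase named mesh:

  // Collective: ghost nodes pick up the ids and processor ids chosen by
  // the processors that own them, so the mesh is again parallel consistent.
  libMesh::MeshCommunication mc;
  mc.make_nodes_parallel_consistent (mesh);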
void libMesh::MeshCommunication::make_p_levels_parallel_consistent ( MeshBase & mesh)

Copy p levels of ghost elements from their local processors.

Definition at line 1521 of file mesh_communication.C.

References libMesh::Elem::as_parent_node(), libMesh::ParallelObject::comm(), data, libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), libMesh::invalid_uint, libMesh::Elem::is_node_on_side(), mesh, libMesh::Elem::n_neighbors(), libMesh::Elem::neighbor(), libMesh::MeshBase::node_ref(), libMesh::Elem::parent(), libMesh::DofObject::processor_id(), libMesh::Elem::refinement_flag(), libMesh::remote_elem, libMesh::Parallel::sync_dofobject_data_by_id(), and libMesh::Elem::which_child_am_i().

Referenced by libMesh::MeshRefinement::_coarsen_elements(), libMesh::MeshRefinement::_refine_elements(), and allgather().

1522 {
1523  // This function must be run on all processors at once
1524  libmesh_parallel_only(mesh.comm());
1525 
1526  LOG_SCOPE ("make_p_levels_parallel_consistent()", "MeshCommunication");
1527 
1528  SyncPLevels syncplevels(mesh);
1529  Parallel::sync_dofobject_data_by_id
1530  (mesh.comm(), mesh.elements_begin(), mesh.elements_end(),
1531  syncplevels);
1532 }
void libMesh::MeshCommunication::redistribute ( DistributedMesh & mesh,
bool  newly_coarsened_only = false 
) const

This method takes a parallel distributed mesh and redistributes the elements. Specifically, any elements stored on a given processor are sent to the processor which "owns" them. Similarly, any elements assigned to the current processor but stored on another are received. Once this step is completed any required ghost elements are updated. The final result is that each processor stores only the elements it actually owns and any ghost elements required to satisfy data dependencies. This method can be invoked after a partitioning step to effect the new partitioning.

Redistribution can also be done with newly coarsened elements' neighbors only.

Definition at line 325 of file mesh_communication.C.

References libMesh::Parallel::Communicator::alltoall(), libMesh::Parallel::any_source, libMesh::MeshBase::clear_point_locator(), libMesh::ParallelObject::comm(), libMesh::connect_children(), libMesh::connect_families(), libMesh::Parallel::Communicator::get_unique_tag(), libMesh::MeshBase::ghosting_functors_begin(), libMesh::MeshBase::ghosting_functors_end(), libMesh::DistributedMesh::is_serial(), libMesh::libmesh_assert(), libMesh::MeshTools::libmesh_assert_equal_n_systems(), libMesh::MeshTools::libmesh_assert_valid_refinement_tree(), libmesh_nullptr, libMesh::MeshTools::n_elem(), libMesh::ParallelObject::n_processors(), libMesh::ParallelObject::processor_id(), libMesh::query_ghosting_functors(), libMesh::Parallel::Communicator::receive_packed_range(), libMesh::reconnect_nodes(), libMesh::MeshTools::Modification::redistribute(), libMesh::GhostingFunctor::redistribute(), libMesh::Parallel::Communicator::send_packed_range(), libMesh::DistributedMesh::unpartitioned_elements_begin(), libMesh::DistributedMesh::unpartitioned_elements_end(), and libMesh::Parallel::wait().

Referenced by libMesh::DistributedMesh::redistribute(), and ~MeshCommunication().

326 {
327  // no MPI == one processor, no redistribution
328  return;
329 }
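
A hedged sketch of the calling pattern; it assumes a partitioner has just assigned new processor ids to the elements of a libMesh::DistributedMesh named dist_mesh (in the library, DistributedMesh::redistribute() makes this call):

  // Collective: each element is moved to the processor that now owns it
  // and the required ghost elements are rebuilt.
  libMesh::MeshCommunication().redistribute (dist_mesh);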

The documentation for this class was generated from the following files:

mesh_communication.h
mesh_communication.C
mesh_communication_global_indices.C