libMesh::MeshCommunication Class Reference

#include <mesh_communication.h>

Public Member Functions

 MeshCommunication ()
 
 ~MeshCommunication ()
 
void clear ()
 
void broadcast (MeshBase &) const
 
void redistribute (DistributedMesh &mesh, bool newly_coarsened_only=false) const
 
void gather_neighboring_elements (DistributedMesh &) const
 
void send_coarse_ghosts (MeshBase &) const
 
void gather (const processor_id_type root_id, DistributedMesh &) const
 
void allgather (DistributedMesh &mesh) const
 
void delete_remote_elements (DistributedMesh &, const std::set< Elem * > &) const
 
void assign_global_indices (MeshBase &) const
 
void check_for_duplicate_global_indices (MeshBase &) const
 
template<typename ForwardIterator >
void find_global_indices (const Parallel::Communicator &communicator, const MeshTools::BoundingBox &, const ForwardIterator &, const ForwardIterator &, std::vector< dof_id_type > &) const
 
void make_elems_parallel_consistent (MeshBase &)
 
void make_p_levels_parallel_consistent (MeshBase &)
 
void make_node_ids_parallel_consistent (MeshBase &)
 
void make_node_unique_ids_parallel_consistent (MeshBase &)
 
void make_node_proc_ids_parallel_consistent (MeshBase &)
 
void make_new_node_proc_ids_parallel_consistent (MeshBase &)
 
void make_nodes_parallel_consistent (MeshBase &)
 
void make_new_nodes_parallel_consistent (MeshBase &)
 

Detailed Description

This is the MeshCommunication class. It handles all the details of communicating mesh information from one processor to another. All parallelization of the Mesh data structures is done via this class.

Author
Benjamin S. Kirk
Date
2003

Definition at line 46 of file mesh_communication.h.

Constructor & Destructor Documentation

libMesh::MeshCommunication::MeshCommunication ( )
inline

Constructor.

Definition at line 53 of file mesh_communication.h.

53 {}
libMesh::MeshCommunication::~MeshCommunication ( )
inline

Destructor.

Definition at line 58 of file mesh_communication.h.

58 {}

Member Function Documentation

void libMesh::MeshCommunication::allgather ( DistributedMesh &  mesh ) const
inline

This method takes an input DistributedMesh which may be distributed among all the processors. Each processor then sends its local nodes and elements to the other processors. The end result is that a previously distributed DistributedMesh will be serialized on each processor. Since this method is collective it must be called by all processors.

Definition at line 131 of file mesh_communication.h.

References gather(), and libMesh::DofObject::invalid_processor_id.

Referenced by libMesh::DistributedMesh::allgather().
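
The call is collective, so every processor must execute it. A minimal usage sketch (illustrative, not from the libMesh sources; the 16x16 quad mesh is an arbitrary choice):

#include "libmesh/libmesh.h"
#include "libmesh/distributed_mesh.h"
#include "libmesh/mesh_generation.h"
#include "libmesh/mesh_communication.h"

using namespace libMesh;

int main (int argc, char ** argv)
{
  LibMeshInit init (argc, argv);

  // build_square() prepares the mesh, which leaves a DistributedMesh
  // distributed across all processors.
  DistributedMesh mesh (init.comm());
  MeshTools::Generation::build_square (mesh, 16, 16);

  // Collective: afterwards every processor holds a full serialized
  // copy.  DistributedMesh::allgather() forwards here.
  MeshCommunication().allgather (mesh);

  return 0;
}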

void libMesh::MeshCommunication::assign_global_indices ( MeshBase &  mesh ) const

This method assigns globally unique, partition-agnostic indices to the nodes and elements in the mesh. The approach is to compute the Hilbert space-filling curve key and use its value to assign an index in [0,N_global). Since the Hilbert key is unique for each spatial location, two objects occupying the same location will be assigned the same global id. Thus, this method can also be useful for identifying duplicate nodes which may occur during parallel refinement.

Definition at line 169 of file mesh_communication_global_indices.C.

References libMesh::Parallel::Sort< KeyType, IdxType >::bin(), libMesh::MeshTools::bounding_box(), libMesh::ParallelObject::comm(), libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), end, libMesh::MeshTools::Generation::Private::idx(), libMesh::libmesh_assert(), libMesh::MeshBase::local_elements_begin(), libMesh::MeshBase::local_elements_end(), libMesh::MeshBase::local_nodes_begin(), libMesh::MeshBase::local_nodes_end(), libMesh::MeshBase::n_elem(), libMesh::MeshBase::n_nodes(), libMesh::MeshBase::nodes_begin(), libMesh::MeshBase::nodes_end(), libMesh::Threads::parallel_for(), libMesh::DofObject::set_id(), and libMesh::Parallel::Sort< KeyType, IdxType >::sort().

Referenced by libMesh::MeshTools::Private::globally_renumber_nodes_and_elements().

170 {
171  LOG_SCOPE ("assign_global_indices()", "MeshCommunication");
172 
173  // This method determines partition-agnostic global indices
174  // for nodes and elements.
175 
176  // Algorithm:
177  // (1) compute the Hilbert key for each local node/element
178  // (2) perform a parallel sort of the Hilbert key
179  // (3) get the min/max value on each processor
180  // (4) determine the position in the global ranking for
181  // each local object
182 
183  const Parallel::Communicator & communicator (mesh.comm());
184 
185  // Global bounding box
186  BoundingBox bbox =
187  MeshTools::bounding_box (mesh);
188 
189  //-------------------------------------------------------------
190  // (1) compute Hilbert keys
191  std::vector<Parallel::DofObjectKey>
192  node_keys, elem_keys;
193 
194  {
195  // Nodes first
196  {
197  ConstNodeRange nr (mesh.local_nodes_begin(),
198  mesh.local_nodes_end());
199  node_keys.resize (nr.size());
200  Threads::parallel_for (nr, ComputeHilbertKeys (bbox, node_keys));
201 
202  // // It's O(N^2) to check that these keys don't duplicate before the
203  // // sort...
204  // MeshBase::const_node_iterator nodei = mesh.local_nodes_begin();
205  // for (std::size_t i = 0; i != node_keys.size(); ++i, ++nodei)
206  // {
207  // MeshBase::const_node_iterator nodej = mesh.local_nodes_begin();
208  // for (std::size_t j = 0; j != i; ++j, ++nodej)
209  // {
210  // if (node_keys[i] == node_keys[j])
211  // {
212  // CFixBitVec icoords[3], jcoords[3];
213  // get_hilbert_coords(**nodej, bbox, jcoords);
214  // libMesh::err <<
215  // "node " << (*nodej)->id() << ", " <<
216  // *(Point *)(*nodej) << " has HilbertIndices " <<
217  // node_keys[j] << std::endl;
218  // get_hilbert_coords(**nodei, bbox, icoords);
219  // libMesh::err <<
220  // "node " << (*nodei)->id() << ", " <<
221  // *(Point *)(*nodei) << " has HilbertIndices " <<
222  // node_keys[i] << std::endl;
223  // libmesh_error_msg("Error: nodes with duplicate Hilbert keys!");
224  // }
225  // }
226  // }
227  }
228 
229  // Elements next
230  {
231  ConstElemRange er (mesh.local_elements_begin(),
232  mesh.local_elements_end());
233  elem_keys.resize (er.size());
234  Threads::parallel_for (er, ComputeHilbertKeys (bbox, elem_keys));
235 
236  // // For elements, the keys can be (and in the case of TRI, are
237  // // expected to be) duplicates, but only if the elements are at
238  // // different levels
239  // MeshBase::const_element_iterator elemi = mesh.local_elements_begin();
240  // for (std::size_t i = 0; i != elem_keys.size(); ++i, ++elemi)
241  // {
242  // MeshBase::const_element_iterator elemj = mesh.local_elements_begin();
243  // for (std::size_t j = 0; j != i; ++j, ++elemj)
244  // {
245  // if ((elem_keys[i] == elem_keys[j]) &&
246  // ((*elemi)->level() == (*elemj)->level()))
247  // {
248  // libMesh::err <<
249  // "level " << (*elemj)->level() << " elem\n" <<
250  // (**elemj) << " centroid " <<
251  // (*elemj)->centroid() << " has HilbertIndices " <<
252  // elem_keys[j] << " or " <<
253  // get_hilbert_index((*elemj), bbox) <<
254  // std::endl;
255  // libMesh::err <<
256  // "level " << (*elemi)->level() << " elem\n" <<
257  // (**elemi) << " centroid " <<
258  // (*elemi)->centroid() << " has HilbertIndices " <<
259  // elem_keys[i] << " or " <<
260  // get_hilbert_index((*elemi), bbox) <<
261  // std::endl;
262  // libmesh_error_msg("Error: level " << (*elemi)->level() << " elements with duplicate Hilbert keys!");
263  // }
264  // }
265  // }
266  }
267  } // done computing Hilbert keys
268 
269 
270 
271  //-------------------------------------------------------------
272  // (2) parallel sort the Hilbert keys
273  Parallel::Sort<Parallel::DofObjectKey> node_sorter (communicator,
274  node_keys);
275  node_sorter.sort(); /* done with node_keys */ //node_keys.clear();
276 
277  const std::vector<Parallel::DofObjectKey> & my_node_bin =
278  node_sorter.bin();
279 
280  Parallel::Sort<Parallel::DofObjectKey> elem_sorter (communicator,
281  elem_keys);
282  elem_sorter.sort(); /* done with elem_keys */ //elem_keys.clear();
283 
284  const std::vector<Parallel::DofObjectKey> & my_elem_bin =
285  elem_sorter.bin();
286 
287 
288 
289  //-------------------------------------------------------------
290  // (3) get the max value on each processor
291  std::vector<Parallel::DofObjectKey>
292  node_upper_bounds(communicator.size()),
293  elem_upper_bounds(communicator.size());
294 
295  { // limit scope of temporaries
296  std::vector<Parallel::DofObjectKey> recvbuf(2*communicator.size());
297  std::vector<unsigned short int> /* do not use a vector of bools here since it is not always so! */
298  empty_nodes (communicator.size()),
299  empty_elem (communicator.size());
300  std::vector<Parallel::DofObjectKey> my_max(2);
301 
302  communicator.allgather (static_cast<unsigned short int>(my_node_bin.empty()), empty_nodes);
303  communicator.allgather (static_cast<unsigned short int>(my_elem_bin.empty()), empty_elem);
304 
305  if (!my_node_bin.empty()) my_max[0] = my_node_bin.back();
306  if (!my_elem_bin.empty()) my_max[1] = my_elem_bin.back();
307 
308  communicator.allgather (my_max, /* identical_buffer_sizes = */ true);
309 
310  // Be careful here. The *_upper_bounds will be used to find the processor
311  // a given object belongs to. So, if a processor contains no objects (possible!)
312  // then copy the bound from the lower processor id.
313  for (processor_id_type p=0; p<communicator.size(); p++)
314  {
315  node_upper_bounds[p] = my_max[2*p+0];
316  elem_upper_bounds[p] = my_max[2*p+1];
317 
318  if (p > 0) // default hilbert index value is the OK upper bound for processor 0.
319  {
320  if (empty_nodes[p]) node_upper_bounds[p] = node_upper_bounds[p-1];
321  if (empty_elem[p]) elem_upper_bounds[p] = elem_upper_bounds[p-1];
322  }
323  }
324  }
325 
326 
327 
328  //-------------------------------------------------------------
329  // (4) determine the position in the global ranking for
330  // each local object
331  {
332  //----------------------------------------------
333  // Nodes first -- all nodes, not just local ones
334  {
335  // Request sets to send to each processor
336  std::vector<std::vector<Parallel::DofObjectKey> >
337  requested_ids (communicator.size());
338  // Results to gather from each processor
339  std::vector<std::vector<dof_id_type> >
340  filled_request (communicator.size());
341 
342  {
343  MeshBase::node_iterator it = mesh.nodes_begin();
344  const MeshBase::node_iterator end = mesh.nodes_end();
345 
346  // build up list of requests
347  for (; it != end; ++it)
348  {
349  const Node * node = (*it);
350  libmesh_assert(node);
351  const Parallel::DofObjectKey hi =
352  get_hilbert_index (node, bbox);
353  const processor_id_type pid =
354  cast_int<processor_id_type>
355  (std::distance (node_upper_bounds.begin(),
356  std::lower_bound(node_upper_bounds.begin(),
357  node_upper_bounds.end(),
358  hi)));
359 
360  libmesh_assert_less (pid, communicator.size());
361 
362  requested_ids[pid].push_back(hi);
363  }
364  }
365 
366  // The number of objects in my_node_bin on each processor
367  std::vector<dof_id_type> node_bin_sizes(communicator.size());
368  communicator.allgather (static_cast<dof_id_type>(my_node_bin.size()), node_bin_sizes);
369 
370  // The offset of my first global index
371  dof_id_type my_offset = 0;
372  for (processor_id_type pid=0; pid<communicator.rank(); pid++)
373  my_offset += node_bin_sizes[pid];
374 
375  // start with pid=0, so that we will trade with ourself
376  for (processor_id_type pid=0; pid<communicator.size(); pid++)
377  {
378  // Trade my requests with processor procup and procdown
379  const processor_id_type procup = cast_int<processor_id_type>
380  ((communicator.rank() + pid) % communicator.size());
381  const processor_id_type procdown = cast_int<processor_id_type>
382  ((communicator.size() + communicator.rank() - pid) %
383  communicator.size());
384 
385  std::vector<Parallel::DofObjectKey> request_to_fill;
386  communicator.send_receive(procup, requested_ids[procup],
387  procdown, request_to_fill);
388 
389  // Fill the requests
390  std::vector<dof_id_type> global_ids; global_ids.reserve(request_to_fill.size());
391  for (std::size_t idx=0; idx<request_to_fill.size(); idx++)
392  {
393  const Parallel::DofObjectKey & hi = request_to_fill[idx];
394  libmesh_assert_less_equal (hi, node_upper_bounds[communicator.rank()]);
395 
396  // find the requested index in my node bin
397  std::vector<Parallel::DofObjectKey>::const_iterator pos =
398  std::lower_bound (my_node_bin.begin(), my_node_bin.end(), hi);
399  libmesh_assert (pos != my_node_bin.end());
400  libmesh_assert_equal_to (*pos, hi);
401 
402  // Finally, assign the global index based off the position of the index
403  // in my array, properly offset.
404  global_ids.push_back(cast_int<dof_id_type>(std::distance(my_node_bin.begin(), pos) + my_offset));
405  }
406 
407  // and trade back
408  communicator.send_receive (procdown, global_ids,
409  procup, filled_request[procup]);
410  }
411 
412  // We now have all the filled requests, so we can loop through our
413  // nodes once and assign the global index to each one.
414  {
415  std::vector<std::vector<dof_id_type>::const_iterator>
416  next_obj_on_proc; next_obj_on_proc.reserve(communicator.size());
417  for (processor_id_type pid=0; pid<communicator.size(); pid++)
418  next_obj_on_proc.push_back(filled_request[pid].begin());
419 
420  {
421  MeshBase::node_iterator it = mesh.nodes_begin();
422  const MeshBase::node_iterator end = mesh.nodes_end();
423 
424  for (; it != end; ++it)
425  {
426  Node * node = (*it);
427  libmesh_assert(node);
428  const Parallel::DofObjectKey hi =
429  get_hilbert_index (node, bbox);
430  const processor_id_type pid =
431  cast_int<processor_id_type>
432  (std::distance (node_upper_bounds.begin(),
433  std::lower_bound(node_upper_bounds.begin(),
434  node_upper_bounds.end(),
435  hi)));
436 
437  libmesh_assert_less (pid, communicator.size());
438  libmesh_assert (next_obj_on_proc[pid] != filled_request[pid].end());
439 
440  const dof_id_type global_index = *next_obj_on_proc[pid];
441  libmesh_assert_less (global_index, mesh.n_nodes());
442  node->set_id() = global_index;
443 
444  ++next_obj_on_proc[pid];
445  }
446  }
447  }
448  }
449 
450  //---------------------------------------------------
451  // elements next -- all elements, not just local ones
452  {
453  // Request sets to send to each processor
454  std::vector<std::vector<Parallel::DofObjectKey> >
455  requested_ids (communicator.size());
456  // Results to gather from each processor
457  std::vector<std::vector<dof_id_type> >
458  filled_request (communicator.size());
459 
460  {
461  MeshBase::element_iterator it = mesh.elements_begin();
462  const MeshBase::element_iterator end = mesh.elements_end();
463 
464  for (; it != end; ++it)
465  {
466  const Elem * elem = (*it);
467  libmesh_assert(elem);
468  const Parallel::DofObjectKey hi =
469  get_hilbert_index (elem, bbox);
470  const processor_id_type pid =
471  cast_int<processor_id_type>
472  (std::distance (elem_upper_bounds.begin(),
473  std::lower_bound(elem_upper_bounds.begin(),
474  elem_upper_bounds.end(),
475  hi)));
476 
477  libmesh_assert_less (pid, communicator.size());
478 
479  requested_ids[pid].push_back(hi);
480  }
481  }
482 
483  // The number of objects in my_elem_bin on each processor
484  std::vector<dof_id_type> elem_bin_sizes(communicator.size());
485  communicator.allgather (static_cast<dof_id_type>(my_elem_bin.size()), elem_bin_sizes);
486 
487  // The offset of my first global index
488  dof_id_type my_offset = 0;
489  for (processor_id_type pid=0; pid<communicator.rank(); pid++)
490  my_offset += elem_bin_sizes[pid];
491 
492  // start with pid=0, so that we will trade with ourself
493  for (processor_id_type pid=0; pid<communicator.size(); pid++)
494  {
495  // Trade my requests with processor procup and procdown
496  const processor_id_type procup = cast_int<processor_id_type>
497  ((communicator.rank() + pid) % communicator.size());
498  const processor_id_type procdown = cast_int<processor_id_type>
499  ((communicator.size() + communicator.rank() - pid) %
500  communicator.size());
501 
502  std::vector<Parallel::DofObjectKey> request_to_fill;
503  communicator.send_receive(procup, requested_ids[procup],
504  procdown, request_to_fill);
505 
506  // Fill the requests
507  std::vector<dof_id_type> global_ids; global_ids.reserve(request_to_fill.size());
508  for (std::size_t idx=0; idx<request_to_fill.size(); idx++)
509  {
510  const Parallel::DofObjectKey & hi = request_to_fill[idx];
511  libmesh_assert_less_equal (hi, elem_upper_bounds[communicator.rank()]);
512 
513  // find the requested index in my elem bin
514  std::vector<Parallel::DofObjectKey>::const_iterator pos =
515  std::lower_bound (my_elem_bin.begin(), my_elem_bin.end(), hi);
516  libmesh_assert (pos != my_elem_bin.end());
517  libmesh_assert_equal_to (*pos, hi);
518 
519  // Finally, assign the global index based off the position of the index
520  // in my array, properly offset.
521  global_ids.push_back (cast_int<dof_id_type>(std::distance(my_elem_bin.begin(), pos) + my_offset));
522  }
523 
524  // and trade back
525  communicator.send_receive (procdown, global_ids,
526  procup, filled_request[procup]);
527  }
528 
529  // We now have all the filled requests, so we can loop through our
530  // elements once and assign the global index to each one.
531  {
532  std::vector<std::vector<dof_id_type>::const_iterator>
533  next_obj_on_proc; next_obj_on_proc.reserve(communicator.size());
534  for (processor_id_type pid=0; pid<communicator.size(); pid++)
535  next_obj_on_proc.push_back(filled_request[pid].begin());
536 
537  {
538  MeshBase::element_iterator it = mesh.elements_begin();
539  const MeshBase::element_iterator end = mesh.elements_end();
540 
541  for (; it != end; ++it)
542  {
543  Elem * elem = (*it);
544  libmesh_assert(elem);
545  const Parallel::DofObjectKey hi =
546  get_hilbert_index (elem, bbox);
547  const processor_id_type pid =
548  cast_int<processor_id_type>
549  (std::distance (elem_upper_bounds.begin(),
550  std::lower_bound(elem_upper_bounds.begin(),
551  elem_upper_bounds.end(),
552  hi)));
553 
554  libmesh_assert_less (pid, communicator.size());
555  libmesh_assert (next_obj_on_proc[pid] != filled_request[pid].end());
556 
557  const dof_id_type global_index = *next_obj_on_proc[pid];
558  libmesh_assert_less (global_index, mesh.n_elem());
559  elem->set_id() = global_index;
560 
561  ++next_obj_on_proc[pid];
562  }
563  }
564  }
565  }
566  }
567 }
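
A usage sketch (illustrative, not from the libMesh sources; this is the renumbering that MeshTools::Private::globally_renumber_nodes_and_elements() applies):

#include "libmesh/libmesh.h"
#include "libmesh/replicated_mesh.h"
#include "libmesh/mesh_generation.h"
#include "libmesh/mesh_communication.h"

using namespace libMesh;

int main (int argc, char ** argv)
{
  LibMeshInit init (argc, argv);

  ReplicatedMesh mesh (init.comm());
  MeshTools::Generation::build_square (mesh, 8, 8);

  // Renumber nodes and elements in place by Hilbert key: afterwards
  // the ids are independent of how the mesh happens to be partitioned.
  MeshCommunication().assign_global_indices (mesh);

  return 0;
}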
void libMesh::MeshCommunication::broadcast ( MeshBase &  mesh ) const

This method takes a mesh (which is assumed to reside on processor 0) and broadcasts it to all the other processors. It also broadcasts any boundary information the mesh has associated with it.

Definition at line 1103 of file mesh_communication.C.

References libMesh::Parallel::broadcast(), libMesh::Parallel::Communicator::broadcast(), libMesh::Parallel::Communicator::broadcast_packed_range(), libMesh::MeshBase::cache_elem_dims(), libMesh::MeshBase::clear(), libMesh::MeshBase::clear_point_locator(), libMesh::ParallelObject::comm(), libMesh::MeshBase::get_boundary_info(), libMesh::MeshBase::level_elements_begin(), libMesh::MeshBase::level_elements_end(), libMesh::libmesh_assert(), mesh, libMesh::MeshBase::n_elem(), libMesh::MeshTools::n_levels(), libMesh::MeshBase::n_nodes(), libMesh::ParallelObject::n_processors(), libMesh::MeshBase::nodes_begin(), libMesh::MeshBase::nodes_end(), libMesh::MeshTools::paranoid_n_levels(), libMesh::ParallelObject::processor_id(), libMesh::BoundaryInfo::set_nodeset_name_map(), libMesh::BoundaryInfo::set_sideset_name_map(), libMesh::MeshBase::set_subdomain_name_map(), and libMesh::Parallel::Communicator::verify().

Referenced by libMesh::NameBasedIO::read(), and libMesh::CheckpointIO::read().

1104 {
1105  // no MPI == one processor, no need for this method...
1106  return;
1107 }
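
A sketch of the reader pattern this method supports (illustrative, not from the libMesh sources; the hand-built one-element QUAD4 mesh stands in for format-specific reading code that only processor 0 runs):

#include "libmesh/libmesh.h"
#include "libmesh/replicated_mesh.h"
#include "libmesh/mesh_communication.h"
#include "libmesh/elem.h"
#include "libmesh/face_quad4.h"
#include "libmesh/point.h"

using namespace libMesh;

int main (int argc, char ** argv)
{
  LibMeshInit init (argc, argv);

  ReplicatedMesh mesh (init.comm());

  // Build a one-element mesh on processor 0 only.
  if (mesh.processor_id() == 0)
    {
      mesh.add_point (Point(0., 0.), 0);
      mesh.add_point (Point(1., 0.), 1);
      mesh.add_point (Point(1., 1.), 2);
      mesh.add_point (Point(0., 1.), 3);

      Elem * elem = mesh.add_elem (new Quad4);
      for (unsigned int n = 0; n != 4; ++n)
        elem->set_node(n) = mesh.node_ptr(n);
    }

  // Replicate the processor-0 mesh (and its BoundaryInfo) everywhere.
  MeshCommunication().broadcast (mesh);
  mesh.prepare_for_use ();

  return 0;
}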
void libMesh::MeshCommunication::check_for_duplicate_global_indices ( MeshBase &  mesh ) const

Throw an error if we have any index clashes in the numbering used by assign_global_indices.

Definition at line 576 of file mesh_communication_global_indices.C.

References libMesh::MeshTools::bounding_box(), libMesh::err, libMesh::MeshBase::local_elements_begin(), libMesh::MeshBase::local_elements_end(), libMesh::MeshBase::local_nodes_begin(), libMesh::MeshBase::local_nodes_end(), and libMesh::Threads::parallel_for().

577 {
578  LOG_SCOPE ("check_for_duplicate_global_indices()", "MeshCommunication");
579 
580  // Global bounding box
581  BoundingBox bbox =
582  MeshTools::bounding_box (mesh);
583 
584  std::vector<Parallel::DofObjectKey>
585  node_keys, elem_keys;
586 
587  {
588  // Nodes first
589  {
590  ConstNodeRange nr (mesh.local_nodes_begin(),
591  mesh.local_nodes_end());
592  node_keys.resize (nr.size());
593  Threads::parallel_for (nr, ComputeHilbertKeys (bbox, node_keys));
594 
595  // It's O(N^2) to check that these keys don't duplicate before the
596  // sort...
597  MeshBase::const_node_iterator nodei = mesh.local_nodes_begin();
598  for (std::size_t i = 0; i != node_keys.size(); ++i, ++nodei)
599  {
600  MeshBase::const_node_iterator nodej = mesh.local_nodes_begin();
601  for (std::size_t j = 0; j != i; ++j, ++nodej)
602  {
603  if (node_keys[i] == node_keys[j])
604  {
605  CFixBitVec icoords[3], jcoords[3];
606  get_hilbert_coords(**nodej, bbox, jcoords);
607  libMesh::err <<
608  "node " << (*nodej)->id() << ", " <<
609  *(Point *)(*nodej) << " has HilbertIndices " <<
610  node_keys[j] << std::endl;
611  get_hilbert_coords(**nodei, bbox, icoords);
612  libMesh::err <<
613  "node " << (*nodei)->id() << ", " <<
614  *(Point *)(*nodei) << " has HilbertIndices " <<
615  node_keys[i] << std::endl;
616  libmesh_error_msg("Error: nodes with duplicate Hilbert keys!");
617  }
618  }
619  }
620  }
621 
622  // Elements next
623  {
624  ConstElemRange er (mesh.local_elements_begin(),
625  mesh.local_elements_end());
626  elem_keys.resize (er.size());
627  Threads::parallel_for (er, ComputeHilbertKeys (bbox, elem_keys));
628 
629  // For elements, the keys can be (and in the case of TRI, are
630  // expected to be) duplicates, but only if the elements are at
631  // different levels
632  MeshBase::const_element_iterator elemi = mesh.local_elements_begin();
633  for (std::size_t i = 0; i != elem_keys.size(); ++i, ++elemi)
634  {
635  MeshBase::const_element_iterator elemj = mesh.local_elements_begin();
636  for (std::size_t j = 0; j != i; ++j, ++elemj)
637  {
638  if ((elem_keys[i] == elem_keys[j]) &&
639  ((*elemi)->level() == (*elemj)->level()))
640  {
641  libMesh::err <<
642  "level " << (*elemj)->level() << " elem\n" <<
643  (**elemj) << " centroid " <<
644  (*elemj)->centroid() << " has HilbertIndices " <<
645  elem_keys[j] << " or " <<
646  get_hilbert_index((*elemj), bbox) <<
647  std::endl;
648  libMesh::err <<
649  "level " << (*elemi)->level() << " elem\n" <<
650  (**elemi) << " centroid " <<
651  (*elemi)->centroid() << " has HilbertIndices " <<
652  elem_keys[i] << " or " <<
653  get_hilbert_index((*elemi), bbox) <<
654  std::endl;
655  libmesh_error_msg("Error: level " << (*elemi)->level() << " elements with duplicate Hilbert keys!");
656  }
657  }
658  }
659  }
660  } // done checking Hilbert keys
661 }
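
A debugging sketch (illustrative, not from the libMesh sources); the check is collective and aborts with a diagnostic if duplicate keys are found:

#include "libmesh/libmesh.h"
#include "libmesh/replicated_mesh.h"
#include "libmesh/mesh_generation.h"
#include "libmesh/mesh_communication.h"

using namespace libMesh;

int main (int argc, char ** argv)
{
  LibMeshInit init (argc, argv);

  ReplicatedMesh mesh (init.comm());
  MeshTools::Generation::build_square (mesh, 8, 8);

  // Errors out if two distinct nodes, or two same-level elements,
  // occupy the same spatial location and therefore share a Hilbert key.
  MeshCommunication().check_for_duplicate_global_indices (mesh);

  return 0;
}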
void libMesh::MeshCommunication::clear ( )

Clears all data structures and returns to a pristine state.

Definition at line 325 of file mesh_communication.C.

326 {
327  // _neighboring_processors.clear();
328 }
void libMesh::MeshCommunication::delete_remote_elements ( DistributedMesh &  mesh,
const std::set< Elem * > &  extra_ghost_elem_ids 
) const

This method takes an input DistributedMesh which may be distributed among all the processors. Each processor deletes all elements which are neither local elements nor "ghost" elements which touch local elements, and deletes all nodes which are not contained in local or ghost elements. The end result is that a previously serial DistributedMesh will be distributed between processors. Since this method is collective it must be called by all processors.

The std::set is a list of extra elements that you don't want to delete. These will be left on the current processor along with local elements and ghosted neighbors.

Definition at line 1799 of file mesh_communication.C.

References libMesh::Elem::active_family_tree(), libMesh::MeshBase::clear_point_locator(), libMesh::ParallelObject::comm(), libMesh::DistributedMesh::delete_elem(), libMesh::DistributedMesh::delete_node(), libMesh::GhostingFunctor::delete_remote_elements(), libMesh::MeshBase::get_boundary_info(), libMesh::MeshBase::ghosting_functors_begin(), libMesh::MeshBase::ghosting_functors_end(), libMesh::DistributedMesh::is_serial(), libMesh::DistributedMesh::level_elements_begin(), libMesh::DistributedMesh::level_elements_end(), libMesh::libmesh_assert(), libMesh::MeshTools::libmesh_assert_valid_refinement_tree(), libmesh_nullptr, libMesh::Elem::make_links_to_me_remote(), libMesh::DistributedMesh::max_elem_id(), libMesh::DistributedMesh::max_node_id(), libMesh::MeshTools::n_levels(), libMesh::Elem::n_sides(), libMesh::DistributedMesh::nodes_begin(), libMesh::DistributedMesh::nodes_end(), libMesh::DistributedMesh::parallel_max_elem_id(), libMesh::DistributedMesh::parallel_max_node_id(), libMesh::ParallelObject::processor_id(), libMesh::BoundaryInfo::regenerate_id_sets(), libMesh::Elem::set_neighbor(), libMesh::Elem::subactive(), and libMesh::Parallel::Communicator::verify().

Referenced by libMesh::DistributedMesh::delete_remote_elements().

1801 {
1802  // The mesh should know it's about to be parallelized
1803  libmesh_assert (!mesh.is_serial());
1804 
1805  LOG_SCOPE("delete_remote_elements()", "MeshCommunication");
1806 
1807 #ifdef DEBUG
1808  // We expect maximum ids to be in sync so we can use them to size
1809  // vectors
1810  libmesh_assert(mesh.comm().verify(mesh.max_node_id()));
1811  libmesh_assert(mesh.comm().verify(mesh.max_elem_id()));
1812  const dof_id_type par_max_node_id = mesh.parallel_max_node_id();
1813  const dof_id_type par_max_elem_id = mesh.parallel_max_elem_id();
1814  libmesh_assert_equal_to (par_max_node_id, mesh.max_node_id());
1815  libmesh_assert_equal_to (par_max_elem_id, mesh.max_elem_id());
1816 #endif
1817 
1818  std::set<const Elem *, CompareElemIdsByLevel> elements_to_keep;
1819 
1820  // Don't delete elements that we were explicitly told not to
1821  for (std::set<Elem *>::const_iterator it = extra_ghost_elem_ids.begin();
1822  it != extra_ghost_elem_ids.end(); ++it)
1823  {
1824  const Elem * elem = *it;
1825 
1826  std::vector<const Elem *> active_family;
1827 #ifdef LIBMESH_ENABLE_AMR
1828  elem->active_family_tree(active_family);
1829 #else
1830  active_family.push_back(elem);
1831 #endif
1832 
1833  for (std::size_t i=0; i != active_family.size(); ++i)
1834  elements_to_keep.insert(active_family[i]);
1835  }
1836 
1837  // See which elements we still need to keep ghosted, given that
1838  // we're keeping local and unpartitioned elements.
1839  query_ghosting_functors(mesh, mesh.processor_id(),
1840  false, elements_to_keep);
1841  query_ghosting_functors(mesh, DofObject::invalid_processor_id,
1842  false, elements_to_keep);
1843 
1844  // The inactive elements we need to send should have their
1845  // immediate children present.
1846  connect_children(mesh, mesh.processor_id(), elements_to_keep);
1847  connect_children(mesh, DofObject::invalid_processor_id, elements_to_keep);
1848 
1849  // The elements we need should have their ancestors and their
1850  // subactive children present too.
1851  connect_families(elements_to_keep);
1852 
1853  // Don't delete nodes that our semilocal elements need
1854  std::set<const Node *> connected_nodes;
1855  reconnect_nodes(elements_to_keep, connected_nodes);
1856 
1857  // Delete all the elements we have no reason to save,
1858  // starting with the most refined so that the mesh
1859  // is valid at all intermediate steps
1860  unsigned int n_levels = MeshTools::n_levels(mesh);
1861 
1862  for (int l = n_levels - 1; l >= 0; --l)
1863  {
1864  MeshBase::element_iterator lev_elem_it = mesh.level_elements_begin(l),
1865  lev_end = mesh.level_elements_end(l);
1866  for (; lev_elem_it != lev_end; ++lev_elem_it)
1867  {
1868  Elem * elem = *lev_elem_it;
1869  libmesh_assert (elem);
1870  // Make sure we don't leave any invalid pointers
1871  const bool keep_me = elements_to_keep.count(elem);
1872 
1873  if (!keep_me)
1874  elem->make_links_to_me_remote();
1875 
1876  // Subactive neighbor pointers aren't preservable here
1877  if (elem->subactive())
1878  for (unsigned int s=0; s != elem->n_sides(); ++s)
1879  elem->set_neighbor(s, libmesh_nullptr);
1880 
1881  // delete_elem doesn't currently invalidate element
1882  // iterators... that had better not change
1883  if (!keep_me)
1884  mesh.delete_elem(elem);
1885  }
1886  }
1887 
1888  // Delete all the nodes we have no reason to save
1889  MeshBase::node_iterator node_it = mesh.nodes_begin(),
1890  node_end = mesh.nodes_end();
1891  for (node_it = mesh.nodes_begin(); node_it != node_end; ++node_it)
1892  {
1893  Node * node = *node_it;
1894  libmesh_assert(node);
1895  if (!connected_nodes.count(node))
1896  mesh.delete_node(node);
1897  }
1898 
1899  // If we had a point locator, it's invalid now that some of the
1900  // elements it pointed to have been deleted.
1901  mesh.clear_point_locator();
1902 
1903  // Much of our boundary info may have been for now-remote parts of
1904  // the mesh, in which case we don't want to keep local copies.
1905  mesh.get_boundary_info().regenerate_id_sets();
1906 
1907  // We now have all remote elements and nodes deleted; our ghosting
1908  // functors should be ready to delete any now-redundant cached data
1909  // they use too.
1910  std::set<GhostingFunctor *>::iterator gf_it = mesh.ghosting_functors_begin();
1911  const std::set<GhostingFunctor *>::iterator gf_end = mesh.ghosting_functors_end();
1912  for (; gf_it != gf_end; ++gf_it)
1913  {
1914  GhostingFunctor *gf = *gf_it;
1915  gf->delete_remote_elements();
1916  }
1917 
1918 #ifdef DEBUG
1919  MeshTools::libmesh_assert_valid_refinement_tree(mesh);
1920 #endif
1921 }
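
A usage sketch (illustrative, not from the libMesh sources); user code normally reaches this through DistributedMesh::delete_remote_elements(), which passes an empty extra-ghost set:

#include "libmesh/libmesh.h"
#include "libmesh/distributed_mesh.h"
#include "libmesh/mesh_generation.h"

using namespace libMesh;

int main (int argc, char ** argv)
{
  LibMeshInit init (argc, argv);

  DistributedMesh mesh (init.comm());
  MeshTools::Generation::build_square (mesh, 16, 16);

  // Serialize everywhere first so the delete step below has remote
  // elements to discard.
  mesh.allgather ();

  // Keep only local elements, required ghosts, and the nodes they
  // touch; everything else is deleted on this processor.
  mesh.delete_remote_elements ();

  return 0;
}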
template<typename ForwardIterator >
void libMesh::MeshCommunication::find_global_indices ( const Parallel::Communicator &  communicator,
const MeshTools::BoundingBox &  bbox,
const ForwardIterator &  begin,
const ForwardIterator &  end,
std::vector< dof_id_type > &  index_map 
) const

This method determines a globally unique, partition-agnostic index for each object in the input range.

Definition at line 670 of file mesh_communication_global_indices.C.

References libMesh::Parallel::Communicator::allgather(), libMesh::Parallel::Sort< KeyType, IdxType >::bin(), end, libMesh::MeshTools::Generation::Private::idx(), libMesh::DofObject::invalid_processor_id, libMesh::libmesh_assert(), std::max(), libMesh::Parallel::Communicator::rank(), libMesh::Real, libMesh::Parallel::Communicator::send_receive(), libMesh::Parallel::Communicator::size(), and libMesh::Parallel::Sort< KeyType, IdxType >::sort().

Referenced by libMesh::ParmetisPartitioner::initialize(), libMesh::MetisPartitioner::partition_range(), and libMesh::Partitioner::partition_unpartitioned_elements().

675 {
676  LOG_SCOPE ("find_global_indices()", "MeshCommunication");
677 
678  // This method determines partition-agnostic global indices
679  // for nodes and elements.
680 
681  // Algorithm:
682  // (1) compute the Hilbert key for each local node/element
683  // (2) perform a parallel sort of the Hilbert key
684  // (3) get the min/max value on each processor
685  // (4) determine the position in the global ranking for
686  // each local object
687  index_map.clear();
688  std::size_t n_objects = std::distance (begin, end);
689  index_map.reserve(n_objects);
690 
691  //-------------------------------------------------------------
692  // (1) compute Hilbert keys
693  // These aren't trivial to compute, and we will need them again.
694  // But the binsort will sort the input vector, trashing the order
695  // that we'd like to rely on. So, two vectors...
696  std::vector<Parallel::DofObjectKey>
697  sorted_hilbert_keys,
698  hilbert_keys;
699  sorted_hilbert_keys.reserve(n_objects);
700  hilbert_keys.reserve(n_objects);
701  {
702  LOG_SCOPE("compute_hilbert_indices()", "MeshCommunication");
703  for (ForwardIterator it=begin; it!=end; ++it)
704  {
705  const Parallel::DofObjectKey hi(get_hilbert_index (*it, bbox));
706  hilbert_keys.push_back(hi);
707 
708  if ((*it)->processor_id() == communicator.rank())
709  sorted_hilbert_keys.push_back(hi);
710 
711  // someone needs to take care of unpartitioned objects!
712  if ((communicator.rank() == 0) &&
713  ((*it)->processor_id() == DofObject::invalid_processor_id))
714  sorted_hilbert_keys.push_back(hi);
715  }
716  }
717 
718  //-------------------------------------------------------------
719  // (2) parallel sort the Hilbert keys
720  START_LOG ("parallel_sort()", "MeshCommunication");
721  Parallel::Sort<Parallel::DofObjectKey> sorter (communicator,
722  sorted_hilbert_keys);
723  sorter.sort();
724  STOP_LOG ("parallel_sort()", "MeshCommunication");
725  const std::vector<Parallel::DofObjectKey> & my_bin = sorter.bin();
726 
727  // The number of objects in my_bin on each processor
728  std::vector<unsigned int> bin_sizes(communicator.size());
729  communicator.allgather (static_cast<unsigned int>(my_bin.size()), bin_sizes);
730 
731  // The offset of my first global index
732  unsigned int my_offset = 0;
733  for (unsigned int pid=0; pid<communicator.rank(); pid++)
734  my_offset += bin_sizes[pid];
735 
736  //-------------------------------------------------------------
737  // (3) get the max value on each processor
738  std::vector<Parallel::DofObjectKey>
739  upper_bounds(1);
740 
741  if (!my_bin.empty())
742  upper_bounds[0] = my_bin.back();
743 
744  communicator.allgather (upper_bounds, /* identical_buffer_sizes = */ true);
745 
746  // Be careful here. The *_upper_bounds will be used to find the processor
747  // a given object belongs to. So, if a processor contains no objects (possible!)
748  // then copy the bound from the lower processor id.
749  for (unsigned int p=1; p<communicator.size(); p++)
750  if (!bin_sizes[p]) upper_bounds[p] = upper_bounds[p-1];
751 
752 
753  //-------------------------------------------------------------
754  // (4) determine the position in the global ranking for
755  // each local object
756  {
757  //----------------------------------------------
758  // all objects, not just local ones
759 
760  // Request sets to send to each processor
761  std::vector<std::vector<Parallel::DofObjectKey> >
762  requested_ids (communicator.size());
763  // Results to gather from each processor
764  std::vector<std::vector<dof_id_type> >
765  filled_request (communicator.size());
766 
767  // build up list of requests
768  std::vector<Parallel::DofObjectKey>::const_iterator hi =
769  hilbert_keys.begin();
770 
771  for (ForwardIterator it = begin; it != end; ++it)
772  {
773  libmesh_assert (hi != hilbert_keys.end());
774 
775  std::vector<Parallel::DofObjectKey>::iterator lb =
776  std::lower_bound(upper_bounds.begin(), upper_bounds.end(),
777  *hi);
778 
779  const processor_id_type pid =
780  cast_int<processor_id_type>
781  (std::distance (upper_bounds.begin(), lb));
782 
783  libmesh_assert_less (pid, communicator.size());
784 
785  requested_ids[pid].push_back(*hi);
786 
787  ++hi;
788  // go ahead and put pid in index_map, that way we
789  // don't have to repeat the std::lower_bound()
790  index_map.push_back(pid);
791  }
792 
793  // start with pid=0, so that we will trade with ourself
794  std::vector<Parallel::DofObjectKey> request_to_fill;
795  std::vector<dof_id_type> global_ids;
796  for (processor_id_type pid=0; pid<communicator.size(); pid++)
797  {
798  // Trade my requests with processor procup and procdown
799  const processor_id_type procup = cast_int<processor_id_type>
800  ((communicator.rank() + pid) % communicator.size());
801  const processor_id_type procdown = cast_int<processor_id_type>
802  ((communicator.size() + communicator.rank() - pid) %
803  communicator.size());
804 
805  communicator.send_receive(procup, requested_ids[procup],
806  procdown, request_to_fill);
807 
808  // Fill the requests
809  global_ids.clear(); global_ids.reserve(request_to_fill.size());
810  for (std::size_t idx=0; idx<request_to_fill.size(); idx++)
811  {
812  const Parallel::DofObjectKey & hilbert_indices = request_to_fill[idx];
813  libmesh_assert_less_equal (hilbert_indices, upper_bounds[communicator.rank()]);
814 
815  // find the requested index in my node bin
816  std::vector<Parallel::DofObjectKey>::const_iterator pos =
817  std::lower_bound (my_bin.begin(), my_bin.end(), hilbert_indices);
818  libmesh_assert (pos != my_bin.end());
819 #ifdef DEBUG
820  // If we could not find the requested Hilbert index in
821  // my_bin, something went terribly wrong, possibly the
822  // Mesh was displaced differently on different processors,
823  // and therefore the Hilbert indices don't agree.
824  if (*pos != hilbert_indices)
825  {
826  // The input will be hilbert_indices. We convert it
827  // to BitVecType using the operator= provided by the
828  // BitVecType class. BitVecType is a CBigBitVec!
829  Hilbert::BitVecType input;
830 #ifdef LIBMESH_ENABLE_UNIQUE_ID
831  input = hilbert_indices.first;
832 #else
833  input = hilbert_indices;
834 #endif
835 
836  // Get output in a vector of CBigBitVec
837  std::vector<CBigBitVec> output(3);
838 
839  // Call the indexToCoords function
840  Hilbert::indexToCoords(&output[0], 8*sizeof(Hilbert::inttype), 3, input);
841 
842  // The entries in the output racks are integers in the
843  // range [0, Hilbert::inttype::max] which can be
844  // converted to floating point values in [0,1] and
845  // finally to actual values using the bounding box.
846  Real max_int_as_real = static_cast<Real>(std::numeric_limits<Hilbert::inttype>::max());
847 
848  // Get the points in [0,1]^3. The zeroth rack of each entry in
849  // 'output' maps to the normalized x, y, and z locations,
850  // respectively.
851  Point p_hat(static_cast<Real>(output[0].racks()[0]) / max_int_as_real,
852  static_cast<Real>(output[1].racks()[0]) / max_int_as_real,
853  static_cast<Real>(output[2].racks()[0]) / max_int_as_real);
854 
855  // Convert the points from [0,1]^3 to their actual (x,y,z) locations
856  Real
857  xmin = bbox.first(0),
858  xmax = bbox.second(0),
859  ymin = bbox.first(1),
860  ymax = bbox.second(1),
861  zmin = bbox.first(2),
862  zmax = bbox.second(2);
863 
864  // Convert the points from [0,1]^3 to their actual (x,y,z) locations
865  Point p(xmin + (xmax-xmin)*p_hat(0),
866  ymin + (ymax-ymin)*p_hat(1),
867  zmin + (zmax-zmin)*p_hat(2));
868 
869  libmesh_error_msg("Could not find hilbert indices: "
870  << hilbert_indices
871  << " corresponding to point " << p);
872  }
873 #endif
874 
875  // Finally, assign the global index based off the position of the index
876  // in my array, properly offset.
877  global_ids.push_back (cast_int<dof_id_type>(std::distance(my_bin.begin(), pos) + my_offset));
878  }
879 
880  // and trade back
881  communicator.send_receive (procdown, global_ids,
882  procup, filled_request[procup]);
883  }
884 
885  // We now have all the filled requests, so we can loop through our
886  // nodes once and assign the global index to each one.
887  {
888  std::vector<std::vector<dof_id_type>::const_iterator>
889  next_obj_on_proc; next_obj_on_proc.reserve(communicator.size());
890  for (unsigned int pid=0; pid<communicator.size(); pid++)
891  next_obj_on_proc.push_back(filled_request[pid].begin());
892 
893  unsigned int cnt=0;
894  for (ForwardIterator it = begin; it != end; ++it, cnt++)
895  {
896  const processor_id_type pid = cast_int<processor_id_type>
897  (index_map[cnt]);
898 
899  libmesh_assert_less (pid, communicator.size());
900  libmesh_assert (next_obj_on_proc[pid] != filled_request[pid].end());
901 
902  const dof_id_type global_index = *next_obj_on_proc[pid];
903  index_map[cnt] = global_index;
904 
905  ++next_obj_on_proc[pid];
906  }
907  }
908  }
909 
910  libmesh_assert_equal_to(index_map.size(), n_objects);
911 }
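
A usage sketch (illustrative, not from the libMesh sources), mirroring how the partitioners listed above call it to get one partition-agnostic index per active element:

#include <vector>
#include "libmesh/libmesh.h"
#include "libmesh/replicated_mesh.h"
#include "libmesh/mesh_generation.h"
#include "libmesh/mesh_communication.h"
#include "libmesh/mesh_tools.h"

using namespace libMesh;

int main (int argc, char ** argv)
{
  LibMeshInit init (argc, argv);

  ReplicatedMesh mesh (init.comm());
  MeshTools::Generation::build_square (mesh, 8, 8);

  // On return, global_index holds one globally unique index per
  // element in the range, in the same order as the input range.
  std::vector<dof_id_type> global_index;
  MeshCommunication().find_global_indices (mesh.comm(),
                                           MeshTools::bounding_box (mesh),
                                           mesh.active_elements_begin(),
                                           mesh.active_elements_end(),
                                           global_index);

  return 0;
}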
void libMesh::MeshCommunication::gather ( const processor_id_type  root_id,
DistributedMesh &  mesh 
) const

This method takes an input DistributedMesh which may be distributed among all the processors. Each processor then sends its local nodes and elements to processor root_id. The end result is that a previously distributed DistributedMesh will be serialized on processor root_id. Since this method is collective it must be called by all processors. For the special case of root_id equal to DofObject::invalid_processor_id this function performs an allgather.

Definition at line 1175 of file mesh_communication.C.

References libMesh::Parallel::Communicator::allgather_packed_range(), libMesh::MeshBase::clear_point_locator(), libMesh::ParallelObject::comm(), libMesh::UnstructuredMesh::find_neighbors(), libMesh::Parallel::gather(), libMesh::Parallel::Communicator::gather_packed_range(), libMesh::DistributedMesh::level_elements_begin(), libMesh::DistributedMesh::level_elements_end(), libMesh::libmesh_assert(), libMesh::MeshTools::libmesh_assert_valid_boundary_ids(), mesh, libMesh::DistributedMesh::n_elem(), libMesh::MeshTools::n_levels(), libMesh::DistributedMesh::n_nodes(), libMesh::ParallelObject::n_processors(), libMesh::DistributedMesh::nodes_begin(), libMesh::DistributedMesh::nodes_end(), libMesh::ParallelObject::processor_id(), renumber, and libMesh::Parallel::Communicator::verify().

Referenced by allgather(), and libMesh::DistributedMesh::gather_to_zero().

1176 {
1177  // no MPI == one processor, no need for this method...
1178  return;
1179 }
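
A usage sketch (illustrative, not from the libMesh sources): serialize the mesh on processor 0 only, e.g. before writing a serial output format:

#include "libmesh/libmesh.h"
#include "libmesh/distributed_mesh.h"
#include "libmesh/mesh_generation.h"
#include "libmesh/mesh_communication.h"

using namespace libMesh;

int main (int argc, char ** argv)
{
  LibMeshInit init (argc, argv);

  DistributedMesh mesh (init.comm());
  MeshTools::Generation::build_square (mesh, 16, 16);

  // Collective: afterwards processor 0 holds the whole mesh while the
  // other processors keep their distributed pieces.
  MeshCommunication().gather (0, mesh);

  return 0;
}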
void libMesh::MeshCommunication::gather_neighboring_elements ( DistributedMesh &  mesh ) const

Finds all the processors that may contain elements that neighbor my elements. This list is guaranteed to include all processors that border any of my elements, but may include additional ones as well. This method computes bounding boxes for the elements on each processor and checks for overlaps.
void libMesh::MeshCommunication::make_elems_parallel_consistent ( MeshBase &  mesh )

Copy ids of ghost elements from their local processors.

Definition at line 1502 of file mesh_communication.C.

References libMesh::MeshBase::active_elements_begin(), libMesh::MeshBase::active_elements_end(), libMesh::ParallelObject::comm(), libMesh::Parallel::sync_dofobject_data_by_id(), and libMesh::Parallel::sync_element_data_by_parent_id().

Referenced by libMesh::MeshRefinement::_refine_elements().

1503 {
1504  // This function must be run on all processors at once
1505  libmesh_parallel_only(mesh.comm());
1506 
1507  LOG_SCOPE ("make_elems_parallel_consistent()", "MeshCommunication");
1508 
1509  SyncIds syncids(mesh, &MeshBase::renumber_elem);
1510  Parallel::sync_element_data_by_parent_id
1511  (mesh, mesh.active_elements_begin(),
1512  mesh.active_elements_end(), syncids);
1513 
1514 #ifdef LIBMESH_ENABLE_UNIQUE_ID
1515  SyncUniqueIds<Elem> syncuniqueids(mesh, &MeshBase::query_elem_ptr);
1516  Parallel::sync_dofobject_data_by_id
1517  (mesh.comm(), mesh.active_elements_begin(),
1518  mesh.active_elements_end(), syncuniqueids);
1519 #endif
1520 }
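
A context sketch (illustrative; the wrapper name sync_new_element_ids is hypothetical). MeshRefinement calls this method after refining a distributed mesh, when ghost copies of new elements still carry temporary ids:

#include "libmesh/libmesh.h"
#include "libmesh/mesh_base.h"
#include "libmesh/mesh_communication.h"

using namespace libMesh;

// After refinement has created new elements on a distributed mesh,
// copy the authoritative ids (and unique_ids) from each element's
// owning processor onto the ghost copies.  Collective call.
void sync_new_element_ids (MeshBase & mesh)
{
  MeshCommunication mc;
  mc.make_elems_parallel_consistent (mesh);
}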
void libMesh::MeshCommunication::make_new_node_proc_ids_parallel_consistent ( MeshBase &  mesh )

Assuming all processor ids on nodes touching local elements are parallel consistent, this function makes processor ids on new nodes on other processors parallel consistent as well.

Definition at line 1688 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), and libMesh::Parallel::sync_node_data_by_element_id().

Referenced by make_new_nodes_parallel_consistent().

1689 {
1690  LOG_SCOPE ("make_new_node_proc_ids_parallel_consistent()", "MeshCommunication");
1691 
1692  // This function must be run on all processors at once
1693  libmesh_parallel_only(mesh.comm());
1694 
1695  // When this function is called, each section of a parallelized mesh
1696  // should be in the following state:
1697  //
1698  // Local nodes should have unique authoritative ids,
1699  // and processor ids consistent with all processors which own
1700  // an element touching them.
1701  //
1702  // Ghost nodes touching local elements should have processor ids
1703  // consistent with all processors which own an element touching
1704  // them.
1705  SyncProcIds sync(mesh);
1706  Parallel::sync_node_data_by_element_id
1707  (mesh, mesh.elements_begin(), mesh.elements_end(),
1708  ElemNodesMaybeNew(), NodeMaybeNew(), sync);
1709 }
void libMesh::MeshCommunication::make_new_nodes_parallel_consistent ( MeshBase &  mesh )

Copy processor_ids and ids on new nodes from their local processors.

Definition at line 1756 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), and libMesh::MeshTools::correct_node_proc_ids().

Referenced by libMesh::MeshRefinement::_refine_elements().

1757 {
1758  // This function must be run on all processors at once
1759  libmesh_parallel_only(mesh.comm());
1760 
1761  // When this function is called, each section of a parallelized mesh
1762  // should be in the following state:
1763  //
1764  // All nodes should have the exact same physical location on every
1765  // processor where they exist.
1766  //
1767  // Local nodes should have unique authoritative ids,
1768  // and processor ids consistent with all processors which own
1769  // an element touching them.
1770  //
1771  // Ghost nodes touching local elements should have processor ids
1772  // consistent with all processors which own an element touching
1773  // them.
1774  //
1775  // Ghost nodes should have ids which are either already correct
1776  // or which are in the "unpartitioned" id space.
1777 
1778  // First, let's sync up processor ids. Some of these processor ids
1779  // may be "wrong" from coarsening, but they're right in the sense
1780  // that they'll tell us who has the authoritative dofobject ids for
1781  // each node.
1782 
1783  this->make_new_node_proc_ids_parallel_consistent(mesh);
1784 
1785  // Second, sync up dofobject ids.
1786  this->make_node_ids_parallel_consistent(mesh);
1787 
1788  // Third, sync up dofobject unique_ids if applicable.
1789  this->make_node_unique_ids_parallel_consistent(mesh);
1790 
1791  // Finally, correct the processor ids to make DofMap happy
1792  MeshTools::correct_node_proc_ids(mesh);
1793 }
void libMesh::MeshCommunication::make_node_ids_parallel_consistent ( MeshBase &  mesh )

Assuming all ids on local nodes are globally unique, and assuming all processor ids are parallel consistent, this function makes all other ids parallel consistent.

Definition at line 1463 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), and libMesh::Parallel::sync_node_data_by_element_id().

Referenced by make_new_nodes_parallel_consistent(), and make_nodes_parallel_consistent().

1464 {
1465  // This function must be run on all processors at once
1466  libmesh_parallel_only(mesh.comm());
1467 
1468  LOG_SCOPE ("make_node_ids_parallel_consistent()", "MeshCommunication");
1469 
1470  SyncNodeIds syncids(mesh);
1471  Parallel::sync_node_data_by_element_id
1472  (mesh, mesh.elements_begin(), mesh.elements_end(),
1473  SyncEverything(), SyncEverything(), syncids);
1474 }
void libMesh::MeshCommunication::make_node_proc_ids_parallel_consistent ( MeshBase &  mesh )

Assuming all processor ids on nodes touching local elements are parallel consistent, this function makes all other processor ids parallel consistent as well.

Definition at line 1659 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), and libMesh::Parallel::sync_node_data_by_element_id().

Referenced by make_nodes_parallel_consistent().

1660 {
1661  LOG_SCOPE ("make_node_proc_ids_parallel_consistent()", "MeshCommunication");
1662 
1663  // This function must be run on all processors at once
1664  libmesh_parallel_only(mesh.comm());
1665 
1666  // When this function is called, each section of a parallelized mesh
1667  // should be in the following state:
1668  //
1669  // All nodes should have the exact same physical location on every
1670  // processor where they exist.
1671  //
1672  // Local nodes should have unique authoritative ids,
1673  // and processor ids consistent with all processors which own
1674  // an element touching them.
1675  //
1676  // Ghost nodes touching local elements should have processor ids
1677  // consistent with all processors which own an element touching
1678  // them.
1679  SyncProcIds sync(mesh);
1680  Parallel::sync_node_data_by_element_id
1681  (mesh, mesh.elements_begin(), mesh.elements_end(),
1682  SyncEverything(), SyncEverything(), sync);
1683 }
void libMesh::MeshCommunication::make_node_unique_ids_parallel_consistent ( MeshBase &  mesh )

Assuming all unique_ids on local nodes are globally unique, and assuming all processor ids are parallel consistent, this function makes all ghost unique_ids parallel consistent.

Definition at line 1478 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::libmesh_ignore(), libMesh::MeshBase::nodes_begin(), libMesh::MeshBase::nodes_end(), and libMesh::Parallel::sync_dofobject_data_by_id().

Referenced by libMesh::BoundaryInfo::add_elements(), make_new_nodes_parallel_consistent(), make_nodes_parallel_consistent(), and libMesh::Nemesis_IO::read().

1479 {
1480  // Avoid unused variable warnings if unique ids aren't enabled.
1481  libmesh_ignore(mesh);
1482 
1483  // This function must be run on all processors at once
1484  libmesh_parallel_only(mesh.comm());
1485 
1486 #ifdef LIBMESH_ENABLE_UNIQUE_ID
1487  LOG_SCOPE ("make_node_unique_ids_parallel_consistent()", "MeshCommunication");
1488 
1489  SyncUniqueIds<Node> syncuniqueids(mesh, &MeshBase::query_node_ptr);
1490  Parallel::sync_dofobject_data_by_id(mesh.comm(),
1491  mesh.nodes_begin(),
1492  mesh.nodes_end(),
1493  syncuniqueids);
1494 
1495 #endif
1496 }
void libMesh::MeshCommunication::make_nodes_parallel_consistent ( MeshBase &  mesh )

Copy processor_ids and ids on ghost nodes from their local processors. This is useful for code which wants to add nodes to a distributed mesh.

Definition at line 1714 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), and libMesh::MeshTools::correct_node_proc_ids().

Referenced by libMesh::MeshRefinement::_coarsen_elements(), libMesh::UnstructuredMesh::all_second_order(), and libMesh::MeshTools::Modification::all_tri().

1715 {
1716  // This function must be run on all processors at once
1717  libmesh_parallel_only(mesh.comm());
1718 
1719  // When this function is called, each section of a parallelized mesh
1720  // should be in the following state:
1721  //
1722  // All nodes should have the exact same physical location on every
1723  // processor where they exist.
1724  //
1725  // Local nodes should have unique authoritative ids,
1726  // and processor ids consistent with all processors which own
1727  // an element touching them.
1728  //
1729  // Ghost nodes touching local elements should have processor ids
1730  // consistent with all processors which own an element touching
1731  // them.
1732  //
1733  // Ghost nodes should have ids which are either already correct
1734  // or which are in the "unpartitioned" id space.
1735 
1736  // First, let's sync up processor ids. Some of these processor ids
1737  // may be "wrong" from coarsening, but they're right in the sense
1738  // that they'll tell us who has the authoritative dofobject ids for
1739  // each node.
1740 
1741  this->make_node_proc_ids_parallel_consistent(mesh);
1742 
1743  // Second, sync up dofobject ids.
1744  this->make_node_ids_parallel_consistent(mesh);
1745 
1746  // Third, sync up dofobject unique_ids if applicable.
1747  this->make_node_unique_ids_parallel_consistent(mesh);
1748 
1749  // Finally, correct the processor ids to make DofMap happy
1750  MeshTools::correct_node_proc_ids(mesh);
1751 }
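
A context sketch (illustrative; the helper name finish_adding_nodes is hypothetical) of the pattern named above, where user code has just added nodes to a distributed mesh:

#include "libmesh/libmesh.h"
#include "libmesh/mesh_base.h"
#include "libmesh/mesh_communication.h"

using namespace libMesh;

// Assumes every processor which can see a new node has added a
// physically identical copy of it, possibly with a temporary or
// unpartitioned id.  The sync settles ownership and ids on the ghost
// copies -- the same pattern UnstructuredMesh::all_second_order()
// relies on.  Collective call.
void finish_adding_nodes (MeshBase & mesh)
{
  MeshCommunication mc;
  mc.make_nodes_parallel_consistent (mesh);
}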
void libMesh::MeshCommunication::make_p_levels_parallel_consistent ( MeshBase &  mesh )

Copy p levels of ghost elements from their local processors.

Definition at line 1526 of file mesh_communication.C.

References libMesh::Elem::as_parent_node(), libMesh::ParallelObject::comm(), data, libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), libMesh::invalid_uint, libMesh::Elem::is_node_on_side(), mesh, libMesh::Elem::n_neighbors(), libMesh::Elem::neighbor(), libMesh::MeshBase::node_ref(), libMesh::Elem::parent(), libMesh::DofObject::processor_id(), libMesh::Elem::refinement_flag(), libMesh::remote_elem, libMesh::Parallel::sync_dofobject_data_by_id(), and libMesh::Elem::which_child_am_i().

Referenced by libMesh::MeshRefinement::_coarsen_elements(), and libMesh::MeshRefinement::_refine_elements().

1527 {
1528  // This function must be run on all processors at once
1529  libmesh_parallel_only(mesh.comm());
1530 
1531  LOG_SCOPE ("make_p_levels_parallel_consistent()", "MeshCommunication");
1532 
1533  SyncPLevels syncplevels(mesh);
1534  Parallel::sync_dofobject_data_by_id
1535  (mesh.comm(), mesh.elements_begin(), mesh.elements_end(),
1536  syncplevels);
1537 }
void libMesh::MeshCommunication::redistribute ( DistributedMesh &  mesh,
bool  newly_coarsened_only = false 
) const

This method takes a parallel distributed mesh and redistributes the elements. Specifically, any elements stored on a given processor are sent to the processor which "owns" them. Similarly, any elements assigned to the current processor but stored on another are received. Once this step is completed any required ghost elements are updated. The final result is that each processor stores only the elements it actually owns and any ghost elements required to satisfy data dependencies. This method can be invoked after a partitioning step to effect the new partitioning.

Redistribution can also be done with newly coarsened elements' neighbors only.

Definition at line 334 of file mesh_communication.C.

References libMesh::Parallel::Communicator::alltoall(), libMesh::Parallel::any_source, libMesh::MeshBase::clear_point_locator(), libMesh::ParallelObject::comm(), libMesh::Parallel::Communicator::get_unique_tag(), libMesh::MeshBase::ghosting_functors_begin(), libMesh::MeshBase::ghosting_functors_end(), libMesh::DistributedMesh::is_serial(), libMesh::libmesh_assert(), libMesh::MeshTools::libmesh_assert_equal_n_systems(), libMesh::MeshTools::libmesh_assert_valid_refinement_tree(), libmesh_nullptr, libMesh::MeshTools::n_elem(), libMesh::ParallelObject::n_processors(), libMesh::ParallelObject::processor_id(), libMesh::Parallel::Communicator::receive_packed_range(), libMesh::MeshTools::Modification::redistribute(), libMesh::GhostingFunctor::redistribute(), libMesh::Parallel::Communicator::send_packed_range(), libMesh::DistributedMesh::unpartitioned_elements_begin(), libMesh::DistributedMesh::unpartitioned_elements_end(), and libMesh::Parallel::wait().

Referenced by libMesh::DistributedMesh::redistribute().

335 {
336  // no MPI == one processor, no redistribution
337  return;
338 }
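
A usage sketch (illustrative, not from the libMesh sources); user code normally reaches this through DistributedMesh::redistribute() after a partitioning step, and repartitioning paths may trigger the redistribution automatically:

#include "libmesh/libmesh.h"
#include "libmesh/distributed_mesh.h"
#include "libmesh/mesh_generation.h"

using namespace libMesh;

int main (int argc, char ** argv)
{
  LibMeshInit init (argc, argv);

  DistributedMesh mesh (init.comm());
  MeshTools::Generation::build_square (mesh, 16, 16);

  // Reassign element ownership with the mesh's Partitioner, then ship
  // each element to the processor which now owns it.
  mesh.partition (init.comm().size());
  mesh.redistribute ();

  return 0;
}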

The documentation for this class was generated from the following files:

mesh_communication.h
mesh_communication.C
mesh_communication_global_indices.C