libMesh::MetisPartitioner Class Reference

Partitioner which interfaces with the METIS library. More...

#include <metis_partitioner.h>

Inheritance diagram for libMesh::MetisPartitioner:

Public Member Functions

 MetisPartitioner ()
 
virtual UniquePtr< Partitioner > clone () const libmesh_override
 
virtual void attach_weights (ErrorVector *weights) libmesh_override
 
virtual void partition_range (MeshBase &mesh, MeshBase::element_iterator it, MeshBase::element_iterator end, const unsigned int n) libmesh_override
 
virtual void partition (MeshBase &mesh, const unsigned int n)
 
virtual void partition (MeshBase &mesh)
 
void repartition (MeshBase &mesh, const unsigned int n)
 
void repartition (MeshBase &mesh)
 

Static Public Member Functions

static void partition_unpartitioned_elements (MeshBase &mesh)
 
static void partition_unpartitioned_elements (MeshBase &mesh, const unsigned int n)
 
static void set_parent_processor_ids (MeshBase &mesh)
 
static void set_node_processor_ids (MeshBase &mesh)
 

Protected Member Functions

virtual void _do_partition (MeshBase &mesh, const unsigned int n) libmesh_override
 
void single_partition (MeshBase &mesh)
 
void single_partition_range (MeshBase::element_iterator it, MeshBase::element_iterator end)
 
virtual void _do_repartition (MeshBase &mesh, const unsigned int n)
 

Protected Attributes

ErrorVector * _weights
 

Static Protected Attributes

static const dof_id_type communication_blocksize = 1000000
 

Detailed Description

Partitioner which interfaces with the METIS library.

The MetisPartitioner uses the Metis graph partitioner to partition the elements.

Author
Benjamin S. Kirk
Date
2003

Definition at line 37 of file metis_partitioner.h.

Constructor & Destructor Documentation

libMesh::MetisPartitioner::MetisPartitioner ( )
inline

Constructor.

Definition at line 44 of file metis_partitioner.h.

Referenced by clone().

44 {}

Member Function Documentation

void libMesh::MetisPartitioner::_do_partition ( MeshBase &  mesh,
const unsigned int  n 
)
protectedvirtual

Partition the MeshBase into n subdomains.

Implements libMesh::Partitioner.

Definition at line 528 of file metis_partitioner.C.

References libMesh::MeshBase::active_elements_begin(), and libMesh::MeshBase::active_elements_end().

Referenced by attach_weights().

530 {
531  this->partition_range(mesh,
532  mesh.active_elements_begin(),
533  mesh.active_elements_end(),
534  n_pieces);
535 }
virtual void libMesh::Partitioner::_do_repartition ( MeshBase &  mesh,
const unsigned int  n 
)
inlineprotectedvirtualinherited

This is the actual re-partitioning method which can be overloaded in derived classes. Note that the default behavior is to simply call the partition function.

Reimplemented in libMesh::ParmetisPartitioner.

Definition at line 201 of file partitioner.h.

References libMesh::Partitioner::_do_partition().

Referenced by libMesh::Partitioner::repartition().

202  { this->_do_partition (mesh, n); }
virtual void libMesh::MetisPartitioner::attach_weights ( ErrorVector *  weights )
inlinevirtual

Attach weights that can be used for partitioning. This ErrorVector should be exactly the same on every processor and should have mesh->max_elem_id() entries.

Reimplemented from libMesh::Partitioner.

Definition at line 55 of file metis_partitioner.h.

References _do_partition(), libMesh::Partitioner::_weights, end, mesh, and partition_range().

55 { _weights = weights; }
virtual UniquePtr<Partitioner> libMesh::MetisPartitioner::clone ( ) const
inlinevirtual

Creates a new partitioner of this type and returns it in a UniquePtr.

Implements libMesh::Partitioner.

Definition at line 50 of file metis_partitioner.h.

References MetisPartitioner().

51  {
52  return UniquePtr<Partitioner>(new MetisPartitioner());
53  }
void libMesh::Partitioner::partition ( MeshBase &  mesh,
const unsigned int  n 
)
virtualinherited

Partitions the MeshBase into n parts by setting processor_id() on Nodes and Elems.

NOTE: If you are implementing a new type of Partitioner, you most likely do not want to override the partition() function; see instead the protected virtual _do_partition() method below. The partition() function is responsible for doing a lot of libmesh-internals-specific setup and finalization before and after the _do_partition() function is called. The only responsibility of the _do_partition() function, on the other hand, is to set the processor IDs of the elements according to a specific partitioning algorithm. See, e.g., MetisPartitioner for an example.

Definition at line 49 of file partitioner.C.

References libMesh::Partitioner::_do_partition(), libMesh::ParallelObject::comm(), libMesh::MeshTools::libmesh_assert_valid_remote_elems(), mesh, std::min(), libMesh::MeshBase::n_active_elem(), libMesh::Partitioner::partition_unpartitioned_elements(), libMesh::MeshBase::redistribute(), libMesh::MeshBase::set_n_partitions(), libMesh::Partitioner::set_node_processor_ids(), libMesh::Partitioner::set_parent_processor_ids(), libMesh::Partitioner::single_partition(), and libMesh::MeshBase::update_post_partitioning().

Referenced by libMesh::ParmetisPartitioner::_do_repartition(), libMesh::Partitioner::partition(), and libMesh::Partitioner::~Partitioner().

51 {
52  libmesh_parallel_only(mesh.comm());
53 
54  // BSK - temporary fix while redistribution is integrated 6/26/2008
55  // Uncomment this to not repartition in parallel
56  // if (!mesh.is_serial())
57  // return;
58 
59  // we cannot partition into more pieces than we have
60  // active elements!
61  const unsigned int n_parts =
62  static_cast<unsigned int>
63  (std::min(mesh.n_active_elem(), static_cast<dof_id_type>(n)));
64 
65  // Set the number of partitions in the mesh
66  mesh.set_n_partitions()=n_parts;
67 
68  if (n_parts == 1)
69  {
70  this->single_partition (mesh);
71  return;
72  }
73 
74  // First assign a temporary partitioning to any unpartitioned elements
 75  Partitioner::partition_unpartitioned_elements (mesh, n_parts);
 76 
77  // Call the partitioning function
78  this->_do_partition(mesh,n_parts);
79 
80  // Set the parent's processor ids
 81  Partitioner::set_parent_processor_ids(mesh);
 82 
83  // Redistribute elements if necessary, before setting node processor
84  // ids, to make sure those will be set consistently
85  mesh.redistribute();
86 
87 #ifdef DEBUG
 88  MeshTools::libmesh_assert_valid_remote_elems(mesh);
 89 
90  // Messed up elem processor_id()s can leave us without the child
91  // elements we need to restrict vectors on a distributed mesh
92  MeshTools::libmesh_assert_valid_procids<Elem>(mesh);
93 #endif
94 
95  // Set the node's processor ids
 96  Partitioner::set_node_processor_ids(mesh);
 97 
98 #ifdef DEBUG
99  MeshTools::libmesh_assert_valid_procids<Elem>(mesh);
100 #endif
101 
102  // Give derived Mesh classes a chance to update any cached data to
103  // reflect the new partitioning
104  mesh.update_post_partitioning();
105 }
void libMesh::Partitioner::partition ( MeshBase &  mesh)
virtualinherited

Partitions the MeshBase into mesh.n_processors() by setting processor_id() on Nodes and Elems.

NOTE: If you are implementing a new type of Partitioner, you most likely do not want to override the partition() function; see instead the protected virtual _do_partition() method below. The partition() function is responsible for doing a lot of libmesh-internals-specific setup and finalization before and after the _do_partition() function is called. The only responsibility of the _do_partition() function, on the other hand, is to set the processor IDs of the elements according to a specific partitioning algorithm. See, e.g., MetisPartitioner for an example.

Definition at line 42 of file partitioner.C.

References libMesh::ParallelObject::n_processors(), and libMesh::Partitioner::partition().

43 {
44  this->partition(mesh,mesh.n_processors());
45 }
void libMesh::MetisPartitioner::partition_range ( MeshBase &  mesh,
MeshBase::element_iterator  it,
MeshBase::element_iterator  end,
const unsigned int  n 
)
virtual

Called by the SubdomainPartitioner to partition elements in the range (it, end).

Reimplemented from libMesh::Partitioner.

Definition at line 58 of file metis_partitioner.C.

References libMesh::Elem::active(), libMesh::Elem::active_family_tree(), libMesh::MeshTools::bounding_box(), libMesh::Parallel::Communicator::broadcast(), libMesh::ParallelObject::comm(), libMesh::vectormap< Key, Tp >::count(), libMesh::Elem::dim(), end, libMesh::err, libMesh::vectormap< Key, Tp >::find(), libMesh::MeshCommunication::find_global_indices(), libMesh::Elem::find_interior_neighbors(), libMesh::DofObject::id(), libMesh::vectormap< Key, Tp >::insert(), libMesh::Elem::interior_parent(), libMesh::MeshBase::is_serial(), libMesh::libmesh_assert(), libmesh_nullptr, std::max(), libMesh::Elem::n_neighbors(), libMesh::Elem::n_nodes(), libMesh::Elem::neighbor_ptr(), libMesh::METIS_CSR_Graph< IndexType >::offsets, libMesh::SFCPartitioner::partition_range(), libMesh::METIS_CSR_Graph< IndexType >::prep_n_nonzeros(), libMesh::METIS_CSR_Graph< IndexType >::prepare_for_use(), libMesh::ParallelObject::processor_id(), libMesh::DofObject::processor_id(), libMesh::MeshBase::query_elem_ptr(), libMesh::METIS_CSR_Graph< IndexType >::vals, and libMesh::Elem::which_neighbor_am_i().

Referenced by attach_weights().

62 {
63  libmesh_assert_greater (n_pieces, 0);
64  libmesh_assert (mesh.is_serial());
65 
66  // Check for an easy return
67  if (n_pieces == 1)
68  {
69  this->single_partition_range (beg, end);
70  return;
71  }
72 
73  // What to do if the Metis library IS NOT present
74 #ifndef LIBMESH_HAVE_METIS
75 
76  libmesh_here();
77  libMesh::err << "ERROR: The library has been built without" << std::endl
78  << "Metis support. Using a space-filling curve" << std::endl
79  << "partitioner instead!" << std::endl;
80 
81  SFCPartitioner sfcp;
82  sfcp.partition_range (mesh, beg, end, n_pieces);
83 
84  // What to do if the Metis library IS present
85 #else
86 
87  LOG_SCOPE("partition_range()", "MetisPartitioner");
88 
89  const dof_id_type n_range_elem = std::distance(beg, end);
90 
91  // Metis will only consider the elements in the range.
92  // We need to map the range element ids into a
93  // contiguous range. Further, we want the unique range indexing to be
94  // independent of the element ordering, otherwise a circular dependency
95  // can result in which the partitioning depends on the ordering which
96  // depends on the partitioning...
97  vectormap<dof_id_type, dof_id_type> global_index_map;
98  global_index_map.reserve (n_range_elem);
99 
100  {
101  std::vector<dof_id_type> global_index;
102 
103  MeshCommunication().find_global_indices (mesh.comm(),
104  MeshTools::bounding_box(mesh),
105  beg, end, global_index);
106 
107  libmesh_assert_equal_to (global_index.size(), n_range_elem);
108 
109  MeshBase::element_iterator it = beg;
110  for (std::size_t cnt=0; it != end; ++it)
111  {
112  const Elem * elem = *it;
113 
114  global_index_map.insert (std::make_pair(elem->id(), global_index[cnt++]));
115  }
116  libmesh_assert_equal_to (global_index_map.size(), n_range_elem);
117  }
118 
119  // If we have boundary elements in this mesh, we want to account for
120  // the connectivity between them and interior elements. We can find
121  // interior elements from boundary elements, but we need to build up
122  // a lookup map to do the reverse.
123  typedef LIBMESH_BEST_UNORDERED_MULTIMAP<const Elem *, const Elem *> map_type;
124  map_type interior_to_boundary_map;
125 
126  {
127  MeshBase::element_iterator it = beg;
128  for (; it != end; ++it)
129  {
130  const Elem * elem = *it;
131 
132  // If we don't have an interior_parent then there's nothing
133  // to look us up.
134  if ((elem->dim() >= LIBMESH_DIM) ||
135  !elem->interior_parent())
136  continue;
137 
138  // get all relevant interior elements
139  std::set<const Elem *> neighbor_set;
140  elem->find_interior_neighbors(neighbor_set);
141 
142  std::set<const Elem *>::iterator n_it = neighbor_set.begin();
143  for (; n_it != neighbor_set.end(); ++n_it)
144  {
145  // FIXME - non-const versions of the std::set<const Elem
146  // *> returning methods would be nice
147  Elem * neighbor = const_cast<Elem *>(*n_it);
148 
149 #if defined(LIBMESH_HAVE_UNORDERED_MULTIMAP) || \
150  defined(LIBMESH_HAVE_TR1_UNORDERED_MULTIMAP) || \
151  defined(LIBMESH_HAVE_HASH_MULTIMAP) || \
152  defined(LIBMESH_HAVE_EXT_HASH_MULTIMAP)
153  interior_to_boundary_map.insert(std::make_pair(neighbor, elem));
154 #else
155  interior_to_boundary_map.insert(interior_to_boundary_map.begin(),
156  std::make_pair(neighbor, elem));
157 #endif
158  }
159  }
160  }
161 
162  // Data structure that Metis will fill up on processor 0 and broadcast.
163  std::vector<Metis::idx_t> part(n_range_elem);
164 
165  // Invoke METIS, but only on processor 0.
166  // Then broadcast the resulting decomposition
167  if (mesh.processor_id() == 0)
168  {
169  // Data structures and parameters needed only on processor 0 by Metis.
170  // std::vector<Metis::idx_t> options(5);
171  std::vector<Metis::idx_t> vwgt(n_range_elem);
172 
173  Metis::idx_t
174  n = static_cast<Metis::idx_t>(n_range_elem), // number of "nodes" (elements) in the graph
175  // wgtflag = 2, // weights on vertices only, none on edges
176  // numflag = 0, // C-style 0-based numbering
177  nparts = static_cast<Metis::idx_t>(n_pieces), // number of subdomains to create
178  edgecut = 0; // the numbers of edges cut by the resulting partition
179 
180  // Set the options
181  // options[0] = 0; // use default options
182 
183  // build the graph
184  METIS_CSR_Graph<Metis::idx_t> csr_graph;
185 
186  csr_graph.offsets.resize(n_range_elem + 1, 0);
187 
188  // Local scope for these
189  {
190  // build the graph in CSR format. Note that
191  // the edges in the graph will correspond to
192  // face neighbors
193 
194 #ifdef LIBMESH_ENABLE_AMR
195  std::vector<const Elem *> neighbors_offspring;
196 #endif
197 
198 #ifndef NDEBUG
199  std::size_t graph_size=0;
200 #endif
201 
202  // (1) first pass - get the row sizes for each element by counting the number
203  // of face neighbors. Also populate the vwght array if necessary
204  MeshBase::element_iterator it = beg;
205  for (; it != end; ++it)
206  {
207  const Elem * elem = *it;
208 
209  const dof_id_type elem_global_index =
210  global_index_map[elem->id()];
211 
212  libmesh_assert_less (elem_global_index, vwgt.size());
213 
214  // maybe there is a better weight?
215  // The weight is used to define what a balanced graph is
216  if (!_weights)
217  vwgt[elem_global_index] = elem->n_nodes();
218  else
219  vwgt[elem_global_index] = static_cast<Metis::idx_t>((*_weights)[elem->id()]);
220 
221  unsigned int num_neighbors = 0;
222 
223  // Loop over the element's neighbors. An element
224  // adjacency corresponds to a face neighbor
225  for (unsigned int ms=0; ms<elem->n_neighbors(); ms++)
226  {
227  const Elem * neighbor = elem->neighbor_ptr(ms);
228 
229  if (neighbor != libmesh_nullptr)
230  {
231  // If the neighbor is not in the range of elements
232  // being partitioned, treat it as a NULL neighbor.
233  //
234  // Note: vectormap::find() is a little weird, it
235  // returns lower_bound, so checking its return
236  // value against end() is not useful. Therefore,
237  //  we call vectormap::count() instead!
238  if (!global_index_map.count(neighbor->id()))
239  continue;
240 
241  // If the neighbor is active treat it
242  // as a connection
243  if (neighbor->active())
244  num_neighbors++;
245 
246 #ifdef LIBMESH_ENABLE_AMR
247 
248  // Otherwise we need to find all of the
249  // neighbor's children that are connected to
250  // us and add them
251  else
252  {
253  // The side of the neighbor to which
254  // we are connected
255  const unsigned int ns =
256  neighbor->which_neighbor_am_i (elem);
257  libmesh_assert_less (ns, neighbor->n_neighbors());
258 
259  // Get all the active children (& grandchildren, etc...)
260  // of the neighbor.
261 
262  // FIXME - this is the wrong thing, since we
263  // should be getting the active family tree on
264  // our side only. But adding too many graph
265  // links may cause hanging nodes to tend to be
266  // on partition interiors, which would reduce
267  // communication overhead for constraint
268  // equations, so we'll leave it.
269  neighbor->active_family_tree (neighbors_offspring);
270 
271  // Get all the neighbor's children that
272  // live on that side and are thus connected
273  // to us
274  for (std::size_t nc=0; nc<neighbors_offspring.size(); nc++)
275  {
276  const Elem * child =
277  neighbors_offspring[nc];
278 
279  // Skip neighbor offspring which are not in the range of elements being partitioned.
280  if (!global_index_map.count(child->id()))
281  continue;
282 
283  // This does not assume a level-1 mesh.
284  // Note that since children have sides numbered
285  // coincident with the parent then this is a sufficient test.
286  if (child->neighbor_ptr(ns) == elem)
287  {
288  libmesh_assert (child->active());
289  num_neighbors++;
290  }
291  }
292  }
293 
294 #endif /* ifdef LIBMESH_ENABLE_AMR */
295 
296  }
297  }
298 
299  // Check for any interior neighbors
300  if ((elem->dim() < LIBMESH_DIM) && elem->interior_parent())
301  {
302  // get all relevant interior elements
303  std::set<const Elem *> neighbor_set;
304  elem->find_interior_neighbors(neighbor_set);
305 
306  num_neighbors += neighbor_set.size();
307  }
308 
309  // Check for any boundary neighbors
310  typedef map_type::iterator map_it_type;
311  std::pair<map_it_type, map_it_type>
312  bounds = interior_to_boundary_map.equal_range(elem);
313  num_neighbors += std::distance(bounds.first, bounds.second);
314 
315  csr_graph.prep_n_nonzeros(elem_global_index, num_neighbors);
316 #ifndef NDEBUG
317  graph_size += num_neighbors;
318 #endif
319  }
320 
321  csr_graph.prepare_for_use();
322 
323  // (2) second pass - fill the compressed adjacency array
324  it = beg;
325 
326  for (; it != end; ++it)
327  {
328  const Elem * elem = *it;
329 
330  const dof_id_type elem_global_index =
331  global_index_map[elem->id()];
332 
333  unsigned int connection=0;
334 
335  // Loop over the element's neighbors. An element
336  // adjacency corresponds to a face neighbor
337  for (unsigned int ms=0; ms<elem->n_neighbors(); ms++)
338  {
339  const Elem * neighbor = elem->neighbor_ptr(ms);
340 
341  if (neighbor != libmesh_nullptr)
342  {
343  // If the neighbor is not in the range of elements
344  // being partitioned, treat it as a NULL neighbor.
345  //
346  // Note: vectormap::find() is a little weird, it
347  // returns lower_bound, so checking its return
348  // value against end() is not useful. Therefore,
349  //  we call vectormap::count() instead!
350  if (!global_index_map.count(neighbor->id()))
351  continue;
352 
353  // If the neighbor is active treat it
354  // as a connection
355  if (neighbor->active())
356  csr_graph(elem_global_index, connection++) = global_index_map[neighbor->id()];
357 
358 #ifdef LIBMESH_ENABLE_AMR
359 
360  // Otherwise we need to find all of the
361  // neighbor's children that are connected to
362  // us and add them
363  else
364  {
365  // The side of the neighbor to which
366  // we are connected
367  const unsigned int ns =
368  neighbor->which_neighbor_am_i (elem);
369  libmesh_assert_less (ns, neighbor->n_neighbors());
370 
371  // Get all the active children (& grandchildren, etc...)
372  // of the neighbor.
373  neighbor->active_family_tree (neighbors_offspring);
374 
375  // Get all the neighbor's children that
376  // live on that side and are thus connected
377  // to us
378  for (std::size_t nc=0; nc<neighbors_offspring.size(); nc++)
379  {
380  const Elem * child =
381  neighbors_offspring[nc];
382 
383  // Skip neighbor offspring which are not in the range of elements being partitioned.
384  if (!global_index_map.count(child->id()))
385  continue;
386 
387  // This does not assume a level-1 mesh.
388  // Note that since children have sides numbered
389  // coincident with the parent then this is a sufficient test.
390  if (child->neighbor_ptr(ns) == elem)
391  {
392  libmesh_assert (child->active());
393 
394  csr_graph(elem_global_index, connection++) = global_index_map[child->id()];
395  }
396  }
397  }
398 
399 #endif /* ifdef LIBMESH_ENABLE_AMR */
400 
401  }
402  }
403 
404  if ((elem->dim() < LIBMESH_DIM) &&
405  elem->interior_parent())
406  {
407  // get all relevant interior elements
408  std::set<const Elem *> neighbor_set;
409  elem->find_interior_neighbors(neighbor_set);
410 
411  std::set<const Elem *>::iterator n_it = neighbor_set.begin();
412  for (; n_it != neighbor_set.end(); ++n_it)
413  {
414  const Elem * neighbor = *n_it;
415 
416  // Not all interior neighbors are necessarily in
417  // the same Mesh (hence not in the global_index_map).
418  // This will be the case when partitioning a
419  // BoundaryMesh, whose elements all have
420  // interior_parents() that belong to some other
421  // Mesh.
422  const Elem * queried_elem = mesh.query_elem_ptr(neighbor->id());
423 
424  // Compare the neighbor and the queried_elem
425  // pointers, make sure they are the same.
426  if (queried_elem && queried_elem == neighbor)
427  {
427  {
428  vectormap<dof_id_type, dof_id_type>::iterator global_index_map_it =
429  global_index_map.find(neighbor->id());
430 
431  // If the interior_neighbor is in the Mesh but
432  // not in the global_index_map, we have other issues.
433  if (global_index_map_it == global_index_map.end())
434  libmesh_error_msg("Interior neighbor with id " << neighbor->id() << " not found in global_index_map.");
435 
436  else
437  csr_graph(elem_global_index, connection++) = global_index_map_it->second;
438  }
439  }
440  }
441 
442  // Check for any boundary neighbors
443  typedef map_type::iterator map_it_type;
444  std::pair<map_it_type, map_it_type>
445  bounds = interior_to_boundary_map.equal_range(elem);
446 
447  for (map_it_type it = bounds.first; it != bounds.second; ++it)
448  {
449  const Elem * neighbor = it->second;
450  csr_graph(elem_global_index, connection++) =
451  global_index_map[neighbor->id()];
452  }
453  }
454 
455  // We create a non-empty vals for a disconnected graph, to
456  // work around a segfault from METIS.
457  libmesh_assert_equal_to (csr_graph.vals.size(),
458  std::max(graph_size, std::size_t(1)));
459  } // done building the graph
460 
461  Metis::idx_t ncon = 1;
462 
463  // Select which type of partitioning to create
464 
465  // Use recursive if the number of partitions is less than or equal to 8
466  if (n_pieces <= 8)
467  Metis::METIS_PartGraphRecursive(&n,
468  &ncon,
469  &csr_graph.offsets[0],
470  &csr_graph.vals[0],
471  &vwgt[0],
472  libmesh_nullptr,
473  libmesh_nullptr,
474  &nparts,
475  libmesh_nullptr,
476  libmesh_nullptr,
477  libmesh_nullptr,
478  &edgecut,
479  &part[0]);
480 
481  // Otherwise use kway
482  else
483  Metis::METIS_PartGraphKway(&n,
484  &ncon,
485  &csr_graph.offsets[0],
486  &csr_graph.vals[0],
487  &vwgt[0],
488  libmesh_nullptr,
489  libmesh_nullptr,
490  &nparts,
491  libmesh_nullptr,
492  libmesh_nullptr,
493  libmesh_nullptr,
494  &edgecut,
495  &part[0]);
496 
497  } // end processor 0 part
498 
499  // Broadcast the resulting partition
500  mesh.comm().broadcast(part);
501 
502  // Assign the returned processor ids. The part array contains
503  // the processor id for each active element, but in terms of
504  // the contiguous indexing we defined above
505  {
506  MeshBase::element_iterator it = beg;
507  for (; it!=end; ++it)
508  {
509  Elem * elem = *it;
510 
511  libmesh_assert (global_index_map.count(elem->id()));
512 
513  const dof_id_type elem_global_index =
514  global_index_map[elem->id()];
515 
516  libmesh_assert_less (elem_global_index, part.size());
517  const processor_id_type elem_procid =
518  static_cast<processor_id_type>(part[elem_global_index]);
519 
520  elem->processor_id() = elem_procid;
521  }
522  }
523 #endif
524 }
void libMesh::Partitioner::partition_unpartitioned_elements ( MeshBase &  mesh)
staticinherited

These functions assign processor IDs to newly-created elements (in parallel) which are currently assigned to processor 0.

Definition at line 175 of file partitioner.C.

References libMesh::ParallelObject::n_processors().

Referenced by libMesh::Partitioner::partition(), libMesh::Partitioner::partition_range(), and libMesh::Partitioner::repartition().

176 {
177  Partitioner::partition_unpartitioned_elements (mesh, mesh.n_processors());
178 }
void libMesh::Partitioner::partition_unpartitioned_elements ( MeshBase &  mesh,
const unsigned int  n 
)
staticinherited

Definition at line 182 of file partitioner.C.

References libMesh::MeshTools::bounding_box(), libMesh::ParallelObject::comm(), end, libMesh::MeshCommunication::find_global_indices(), libMesh::MeshTools::n_elem(), libMesh::ParallelObject::n_processors(), libMesh::DofObject::processor_id(), libMesh::MeshBase::unpartitioned_elements_begin(), and libMesh::MeshBase::unpartitioned_elements_end().

184 {
185  MeshBase::element_iterator it = mesh.unpartitioned_elements_begin();
186  const MeshBase::element_iterator end = mesh.unpartitioned_elements_end();
187 
188  const dof_id_type n_unpartitioned_elements = MeshTools::n_elem (it, end);
189 
190  // the unpartitioned elements must exist on all processors. If the range is empty on one
191  // it is empty on all, and we can quit right here.
192  if (!n_unpartitioned_elements) return;
193 
194  // find the target subdomain sizes
195  std::vector<dof_id_type> subdomain_bounds(mesh.n_processors());
196 
197  for (processor_id_type pid=0; pid<mesh.n_processors(); pid++)
198  {
199  dof_id_type tgt_subdomain_size = 0;
200 
201  // watch out for the case that n_subdomains < n_processors
202  if (pid < n_subdomains)
203  {
204  tgt_subdomain_size = n_unpartitioned_elements/n_subdomains;
205 
206  if (pid < n_unpartitioned_elements%n_subdomains)
207  tgt_subdomain_size++;
208 
209  }
210 
211  //libMesh::out << "pid, #= " << pid << ", " << tgt_subdomain_size << std::endl;
212  if (pid == 0)
213  subdomain_bounds[0] = tgt_subdomain_size;
214  else
215  subdomain_bounds[pid] = subdomain_bounds[pid-1] + tgt_subdomain_size;
216  }
217 
218  libmesh_assert_equal_to (subdomain_bounds.back(), n_unpartitioned_elements);
219 
220  // create the unique mapping for all unpartitioned elements independent of partitioning
221  // determine the global indexing for all the unpartitoned elements
222  std::vector<dof_id_type> global_indices;
223 
224  // Calling this on all processors a unique range in [0,n_unpartitioned_elements) is constructed.
225  // Only the indices for the elements we pass in are returned in the array.
226  MeshCommunication().find_global_indices (mesh.comm(),
227  MeshTools::bounding_box(mesh), it, end,
228  global_indices);
229 
230  for (dof_id_type cnt=0; it != end; ++it)
231  {
232  Elem * elem = *it;
233 
234  libmesh_assert_less (cnt, global_indices.size());
235  const dof_id_type global_index =
236  global_indices[cnt++];
237 
238  libmesh_assert_less (global_index, subdomain_bounds.back());
239  libmesh_assert_less (global_index, n_unpartitioned_elements);
240 
241  const processor_id_type subdomain_id =
242  cast_int<processor_id_type>
243  (std::distance(subdomain_bounds.begin(),
244  std::upper_bound(subdomain_bounds.begin(),
245  subdomain_bounds.end(),
246  global_index)));
247  libmesh_assert_less (subdomain_id, n_subdomains);
248 
249  elem->processor_id() = subdomain_id;
250  //libMesh::out << "assigning " << global_index << " to " << subdomain_id << std::endl;
251  }
252 }
void libMesh::Partitioner::repartition ( MeshBase &  mesh,
const unsigned int  n 
)
inherited

Repartitions the MeshBase into n parts. (Some partitioning algorithms can repartition more efficiently than computing a new partitioning from scratch.) The default behavior is to simply call this->partition(mesh,n).

Definition at line 116 of file partitioner.C.

References libMesh::Partitioner::_do_repartition(), std::min(), libMesh::MeshBase::n_active_elem(), libMesh::Partitioner::partition_unpartitioned_elements(), libMesh::MeshBase::set_n_partitions(), libMesh::Partitioner::set_node_processor_ids(), libMesh::Partitioner::set_parent_processor_ids(), and libMesh::Partitioner::single_partition().

Referenced by libMesh::Partitioner::partition_range(), and libMesh::Partitioner::repartition().

118 {
119  // we cannot partition into more pieces than we have
120  // active elements!
121  const unsigned int n_parts =
122  static_cast<unsigned int>
123  (std::min(mesh.n_active_elem(), static_cast<dof_id_type>(n)));
124 
125  // Set the number of partitions in the mesh
126  mesh.set_n_partitions()=n_parts;
127 
128  if (n_parts == 1)
129  {
130  this->single_partition (mesh);
131  return;
132  }
133 
134  // First assign a temporary partitioning to any unpartitioned elements
135  Partitioner::partition_unpartitioned_elements(mesh, n_parts);
136 
137  // Call the partitioning function
138  this->_do_repartition(mesh,n_parts);
139 
140  // Set the parent's processor ids
141  Partitioner::set_parent_processor_ids(mesh);
142 
143  // Set the node's processor ids
144  Partitioner::set_node_processor_ids(mesh);
145 }
void libMesh::Partitioner::repartition ( MeshBase &  mesh)
inherited

Repartitions the MeshBase into mesh.n_processors() parts. This is required since some partitioning algorithms can repartition more efficiently than computing a new partitioning from scratch.

Definition at line 109 of file partitioner.C.

References libMesh::ParallelObject::n_processors(), and libMesh::Partitioner::repartition().

110 {
111  this->repartition(mesh,mesh.n_processors());
112 }
void libMesh::Partitioner::set_node_processor_ids ( MeshBase &  mesh)
staticinherited

This function is called after partitioning to set the processor IDs for the nodes. By definition, a Node's processor ID is the minimum processor ID for all of the elements which share the node.

Definition at line 431 of file partitioner.C.

References libMesh::MeshBase::active_elements_begin(), libMesh::MeshBase::active_elements_end(), libMesh::ParallelObject::comm(), libMesh::DofObject::id(), libMesh::DofObject::invalid_processor_id, libMesh::DofObject::invalidate_processor_id(), libMesh::libmesh_assert(), mesh, std::min(), libMesh::MeshTools::n_elem(), libMesh::Elem::n_nodes(), libMesh::ParallelObject::n_processors(), libMesh::Elem::node_ptr(), libMesh::MeshBase::node_ref(), libMesh::MeshBase::nodes_begin(), libMesh::MeshBase::nodes_end(), libMesh::MeshBase::not_active_elements_begin(), libMesh::MeshBase::not_active_elements_end(), libMesh::ParallelObject::processor_id(), libMesh::DofObject::processor_id(), libMesh::Parallel::Communicator::send_receive(), libMesh::MeshBase::subactive_elements_begin(), libMesh::MeshBase::subactive_elements_end(), libMesh::MeshBase::unpartitioned_elements_begin(), and libMesh::MeshBase::unpartitioned_elements_end().

Referenced by libMesh::UnstructuredMesh::all_first_order(), libMesh::Partitioner::partition(), libMesh::MeshBase::partition(), libMesh::Partitioner::partition_range(), libMesh::XdrIO::read(), libMesh::Partitioner::repartition(), and libMesh::BoundaryInfo::sync().

432 {
433  LOG_SCOPE("set_node_processor_ids()","Partitioner");
434 
435  // This function must be run on all processors at once
436  libmesh_parallel_only(mesh.comm());
437 
438  // If we have any unpartitioned elements at this
439  // stage there is a problem
440  libmesh_assert (MeshTools::n_elem(mesh.unpartitioned_elements_begin(),
441  mesh.unpartitioned_elements_end()) == 0);
442 
443 
444  // const dof_id_type orig_n_local_nodes = mesh.n_local_nodes();
445 
446  // libMesh::err << "[" << mesh.processor_id() << "]: orig_n_local_nodes="
447  // << orig_n_local_nodes << std::endl;
448 
449  // Build up request sets. Each node is currently owned by a processor because
450  // it is connected to an element owned by that processor. However, during the
451  // repartitioning phase that element may have been assigned a new processor id, but
452  // it is still resident on the original processor. We need to know where to look
453  // for new ids before assigning new ids, otherwise we may be asking the wrong processors
454  // for the wrong information.
455  //
456  // The only remaining issue is what to do with unpartitioned nodes. Since they are required
457  // to live on all processors we can simply rely on ourselves to number them properly.
458  std::vector<std::vector<dof_id_type> >
459  requested_node_ids(mesh.n_processors());
460 
461  // Loop over all the nodes, count the ones on each processor. We can skip ourself
462  std::vector<dof_id_type> ghost_nodes_from_proc(mesh.n_processors(), 0);
463 
464  MeshBase::node_iterator node_it = mesh.nodes_begin();
465  const MeshBase::node_iterator node_end = mesh.nodes_end();
466 
467  for (; node_it != node_end; ++node_it)
468  {
469  Node * node = *node_it;
470  libmesh_assert(node);
471  const processor_id_type current_pid = node->processor_id();
472  if (current_pid != mesh.processor_id() &&
473  current_pid != DofObject::invalid_processor_id)
474  {
475  libmesh_assert_less (current_pid, ghost_nodes_from_proc.size());
476  ghost_nodes_from_proc[current_pid]++;
477  }
478  }
479 
480  // We know how many objects live on each processor, so reserve()
481  // space for each.
482  for (processor_id_type pid=0; pid != mesh.n_processors(); ++pid)
483  requested_node_ids[pid].reserve(ghost_nodes_from_proc[pid]);
484 
485  // We need to get the new pid for each node from the processor
486  // which *currently* owns the node. We can safely skip ourself
487  for (node_it = mesh.nodes_begin(); node_it != node_end; ++node_it)
488  {
489  Node * node = *node_it;
490  libmesh_assert(node);
491  const processor_id_type current_pid = node->processor_id();
492  if (current_pid != mesh.processor_id() &&
493  current_pid != DofObject::invalid_processor_id)
494  {
495  libmesh_assert_less (current_pid, requested_node_ids.size());
496  libmesh_assert_less (requested_node_ids[current_pid].size(),
497  ghost_nodes_from_proc[current_pid]);
498  requested_node_ids[current_pid].push_back(node->id());
499  }
500 
501  // Unset any previously-set node processor ids
502  node->invalidate_processor_id();
503  }
504 
505  // Loop over all the active elements
506  MeshBase::element_iterator elem_it = mesh.active_elements_begin();
507  const MeshBase::element_iterator elem_end = mesh.active_elements_end();
508 
509  for ( ; elem_it != elem_end; ++elem_it)
510  {
511  Elem * elem = *elem_it;
512  libmesh_assert(elem);
513 
514  libmesh_assert_not_equal_to (elem->processor_id(), DofObject::invalid_processor_id);
515 
516  // For each node, set the processor ID to the min of
517  // its current value and this Element's processor id.
518  //
519  // TODO: we would probably get better parallel partitioning if
520  // we did something like "min for even numbered nodes, max for
521  // odd numbered". We'd need to be careful about how that would
522  // affect solution ordering for I/O, though.
523  for (unsigned int n=0; n<elem->n_nodes(); ++n)
524  elem->node_ptr(n)->processor_id() = std::min(elem->node_ptr(n)->processor_id(),
525  elem->processor_id());
526  }
527 
528  // And loop over the subactive elements, but don't reassign
529  // nodes that are already active on another processor.
530  MeshBase::element_iterator sub_it = mesh.subactive_elements_begin();
531  const MeshBase::element_iterator sub_end = mesh.subactive_elements_end();
532 
533  for ( ; sub_it != sub_end; ++sub_it)
534  {
535  Elem * elem = *sub_it;
536  libmesh_assert(elem);
537 
538  libmesh_assert_not_equal_to (elem->processor_id(), DofObject::invalid_processor_id);
539 
540  for (unsigned int n=0; n<elem->n_nodes(); ++n)
541  if (elem->node_ptr(n)->processor_id() == DofObject::invalid_processor_id)
542  elem->node_ptr(n)->processor_id() = elem->processor_id();
543  }
544 
545  // Same for the inactive elements -- we will have already gotten most of these
546  // nodes, *except* for the case of a parent with a subset of children which are
547  // ghost elements. In that case some of the parent nodes will not have been
548  // properly handled yet
549  MeshBase::element_iterator not_it = mesh.not_active_elements_begin();
550  const MeshBase::element_iterator not_end = mesh.not_active_elements_end();
551 
552  for ( ; not_it != not_end; ++not_it)
553  {
554  Elem * elem = *not_it;
555  libmesh_assert(elem);
556 
557  libmesh_assert_not_equal_to (elem->processor_id(), DofObject::invalid_processor_id);
558 
559  for (unsigned int n=0; n<elem->n_nodes(); ++n)
560  if (elem->node_ptr(n)->processor_id() == DofObject::invalid_processor_id)
561  elem->node_ptr(n)->processor_id() = elem->processor_id();
562  }
563 
564  // We can't assert that all nodes are connected to elements, because
565  // a DistributedMesh with NodeConstraints might have pulled in some
566  // remote nodes solely for evaluating those constraints.
567  // MeshTools::libmesh_assert_connected_nodes(mesh);
568 
569  // For such nodes, we'll do a sanity check later when making sure
570  // that we successfully reset their processor ids to something
571  // valid.
572 
573  // Next set node ids from other processors, excluding self
574  for (processor_id_type p=1; p != mesh.n_processors(); ++p)
575  {
576  // Trade my requests with processor procup and procdown
577  processor_id_type procup = cast_int<processor_id_type>
578  ((mesh.processor_id() + p) % mesh.n_processors());
579  processor_id_type procdown = cast_int<processor_id_type>
580  ((mesh.n_processors() + mesh.processor_id() - p) %
581  mesh.n_processors());
582  std::vector<dof_id_type> request_to_fill;
583  mesh.comm().send_receive(procup, requested_node_ids[procup],
584  procdown, request_to_fill);
585 
586  // Fill those requests in-place
587  for (std::size_t i=0; i != request_to_fill.size(); ++i)
588  {
589  Node & node = mesh.node_ref(request_to_fill[i]);
590  const processor_id_type new_pid = node.processor_id();
591 
592  // We may have an invalid processor_id() on nodes that have been
593  // "detatched" from coarsened-away elements but that have not yet
594  // themselves been removed.
595  // libmesh_assert_not_equal_to (new_pid, DofObject::invalid_processor_id);
596  // libmesh_assert_less (new_pid, mesh.n_partitions()); // this is the correct test --
597  request_to_fill[i] = new_pid; // the number of partitions may
598  } // not equal the number of processors
599 
600  // Trade back the results
601  std::vector<dof_id_type> filled_request;
602  mesh.comm().send_receive(procdown, request_to_fill,
603  procup, filled_request);
604  libmesh_assert_equal_to (filled_request.size(), requested_node_ids[procup].size());
605 
606  // And copy the id changes we've now been informed of
607  for (std::size_t i=0; i != filled_request.size(); ++i)
608  {
609  Node & node = mesh.node_ref(requested_node_ids[procup][i]);
610 
611  // this is the correct test -- the number of partitions may
612  // not equal the number of processors
613 
614  // But: we may have an invalid processor_id() on nodes that
615  // have been "detached" from coarsened-away elements but
616  // that have not yet themselves been removed.
617  // libmesh_assert_less (filled_request[i], mesh.n_partitions());
618 
619  node.processor_id(cast_int<processor_id_type>(filled_request[i]));
620  }
621  }
622 
623 #ifdef DEBUG
624  MeshTools::libmesh_assert_valid_procids<Node>(mesh);
625 #endif
626 }
void libMesh::Partitioner::set_parent_processor_ids ( MeshBase & mesh)
staticinherited

This function is called after partitioning to set the processor IDs for the inactive parent elements. A parent's processor ID is the same as its first child.

Definition at line 256 of file partitioner.C.

References libMesh::MeshBase::active_elements_begin(), libMesh::MeshBase::active_elements_end(), libMesh::Elem::active_family_tree(), libMesh::MeshBase::ancestor_elements_begin(), libMesh::MeshBase::ancestor_elements_end(), libMesh::Elem::child_ptr(), libMesh::ParallelObject::comm(), libMesh::Partitioner::communication_blocksize, end, libMesh::DofObject::id(), libMesh::DofObject::invalid_processor_id, libMesh::DofObject::invalidate_processor_id(), libMesh::Elem::is_remote(), libMesh::MeshBase::is_serial(), libMesh::libmesh_assert(), libMesh::libmesh_ignore(), libMesh::MeshBase::max_elem_id(), std::min(), libMesh::Parallel::Communicator::min(), libMesh::Elem::n_children(), libMesh::MeshTools::n_elem(), libMesh::Elem::parent(), libMesh::processor_id(), libMesh::DofObject::processor_id(), libMesh::Elem::total_family_tree(), libMesh::MeshBase::unpartitioned_elements_begin(), and libMesh::MeshBase::unpartitioned_elements_end().

Referenced by libMesh::Partitioner::partition(), libMesh::Partitioner::partition_range(), and libMesh::Partitioner::repartition().

257 {
258  // Ignore the parameter when !LIBMESH_ENABLE_AMR
259  libmesh_ignore(mesh);
260 
261  LOG_SCOPE("set_parent_processor_ids()", "Partitioner");
262 
263 #ifdef LIBMESH_ENABLE_AMR
264 
265  // If the mesh is serial we have access to all the elements,
266  // in particular all the active ones. We can therefore set
267  // the parent processor ids indirectly through their children, and
268  // set the subactive processor ids while examining their active
269  // ancestors.
270  // By convention a parent is assigned to the minimum processor
271  // of all its children, and a subactive is assigned to the processor
272  // of its active ancestor.
273  if (mesh.is_serial())
274  {
275  // Loop over all the active elements in the mesh
276  MeshBase::element_iterator it = mesh.active_elements_begin();
277  const MeshBase::element_iterator end = mesh.active_elements_end();
278 
279  for ( ; it!=end; ++it)
280  {
281  Elem * child = *it;
282 
283  // First set descendants
284 
285  std::vector<const Elem *> subactive_family;
286  child->total_family_tree(subactive_family);
287  for (std::size_t i = 0; i != subactive_family.size(); ++i)
288  const_cast<Elem *>(subactive_family[i])->processor_id() = child->processor_id();
289 
290  // Then set ancestors
291 
292  Elem * parent = child->parent();
293 
294  while (parent)
295  {
296  // invalidate the parent id, otherwise the min below
297  // will not work if the current parent id is less
298  // than all the children!
299  parent->invalidate_processor_id();
300 
301  for (unsigned int c=0; c<parent->n_children(); c++)
302  {
303  child = parent->child_ptr(c);
304  libmesh_assert(child);
305  libmesh_assert(!child->is_remote());
306  libmesh_assert_not_equal_to (child->processor_id(), DofObject::invalid_processor_id);
307  parent->processor_id() = std::min(parent->processor_id(),
308  child->processor_id());
309  }
310  parent = parent->parent();
311  }
312  }
313  }
314 
315  // When the mesh is parallel we cannot guarantee that parents have access to
316  // all their children.
317  else
318  {
319  // Setting subactive processor ids is easy: we can guarantee
320  // that children have access to all their parents.
321 
322  // Loop over all the active elements in the mesh
323  MeshBase::element_iterator it = mesh.active_elements_begin();
324  const MeshBase::element_iterator end = mesh.active_elements_end();
325 
326  for ( ; it!=end; ++it)
327  {
328  Elem * child = *it;
329 
330  std::vector<const Elem *> subactive_family;
331  child->total_family_tree(subactive_family);
332  for (std::size_t i = 0; i != subactive_family.size(); ++i)
333  const_cast<Elem *>(subactive_family[i])->processor_id() = child->processor_id();
334  }
335 
336  // When the mesh is parallel we cannot guarantee that parents have access to
337  // all their children.
338 
339  // We will use a brute-force approach here. Each processor finds its parent
340  // elements and sets the parent pid to the minimum of its
341  // semilocal descendants.
342  // A global reduction is then performed to make sure the true minimum is found.
343  // As noted, this is required because we cannot guarantee that a parent has
344  // access to all its children on any single processor.
345  libmesh_parallel_only(mesh.comm());
346  libmesh_assert(MeshTools::n_elem(mesh.unpartitioned_elements_begin(),
347  mesh.unpartitioned_elements_end()) == 0);
348 
349  const dof_id_type max_elem_id = mesh.max_elem_id();
350 
351  std::vector<processor_id_type>
352  parent_processor_ids (std::min(communication_blocksize,
353  max_elem_id));
354 
355  for (dof_id_type blk=0, last_elem_id=0; last_elem_id<max_elem_id; blk++)
356  {
357  last_elem_id =
358  std::min(static_cast<dof_id_type>((blk+1)*communication_blocksize),
359  max_elem_id);
360  const dof_id_type first_elem_id = blk*communication_blocksize;
361 
362  std::fill (parent_processor_ids.begin(),
363  parent_processor_ids.end(),
364  DofObject::invalid_processor_id);
365 
366  // first build up local contributions to parent_processor_ids
367  MeshBase::element_iterator not_it = mesh.ancestor_elements_begin();
368  const MeshBase::element_iterator not_end = mesh.ancestor_elements_end();
369 
370  bool have_parent_in_block = false;
371 
372  for ( ; not_it != not_end; ++not_it)
373  {
374  Elem * parent = *not_it;
375 
376  const dof_id_type parent_idx = parent->id();
377  libmesh_assert_less (parent_idx, max_elem_id);
378 
379  if ((parent_idx >= first_elem_id) &&
380  (parent_idx < last_elem_id))
381  {
382  have_parent_in_block = true;
383  processor_id_type parent_pid = DofObject::invalid_processor_id;
384 
385  std::vector<const Elem *> active_family;
386  parent->active_family_tree(active_family);
387  for (std::size_t i = 0; i != active_family.size(); ++i)
388  parent_pid = std::min (parent_pid, active_family[i]->processor_id());
389 
390  const dof_id_type packed_idx = parent_idx - first_elem_id;
391  libmesh_assert_less (packed_idx, parent_processor_ids.size());
392 
393  parent_processor_ids[packed_idx] = parent_pid;
394  }
395  }
396 
397  // then find the global minimum
398  mesh.comm().min (parent_processor_ids);
399 
400  // and assign the ids, if we have a parent in this block.
401  if (have_parent_in_block)
402  for (not_it = mesh.ancestor_elements_begin();
403  not_it != not_end; ++not_it)
404  {
405  Elem * parent = *not_it;
406 
407  const dof_id_type parent_idx = parent->id();
408 
409  if ((parent_idx >= first_elem_id) &&
410  (parent_idx < last_elem_id))
411  {
412  const dof_id_type packed_idx = parent_idx - first_elem_id;
413  libmesh_assert_less (packed_idx, parent_processor_ids.size());
414 
415  const processor_id_type parent_pid =
416  parent_processor_ids[packed_idx];
417 
418  libmesh_assert_not_equal_to (parent_pid, DofObject::invalid_processor_id);
419 
420  parent->processor_id() = parent_pid;
421  }
422  }
423  }
424  }
425 
426 #endif // LIBMESH_ENABLE_AMR
427 }
void libMesh::Partitioner::single_partition ( MeshBase & mesh)
protectedinherited

Trivially "partitions" the mesh for one processor. Simply loops through the elements and assigns all of them to processor 0. It is provided as a separate function so that derived classes may use it without reimplementing it.

Definition at line 151 of file partitioner.C.

References libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), and libMesh::Partitioner::single_partition_range().

Referenced by libMesh::SubdomainPartitioner::_do_partition(), libMesh::Partitioner::attach_weights(), libMesh::Partitioner::partition(), and libMesh::Partitioner::repartition().

152 {
153  this->single_partition_range(mesh.elements_begin(),
154  mesh.elements_end());
155 }
void libMesh::Partitioner::single_partition_range ( MeshBase::element_iterator  it,
MeshBase::element_iterator  end 
)
protectedinherited

Slightly generalized version of single_partition which acts on a range of elements defined by the pair of iterators (it, end).

Definition at line 159 of file partitioner.C.

References end, libMesh::Elem::n_nodes(), libMesh::Elem::node_ptr(), and libMesh::DofObject::processor_id().

Referenced by libMesh::Partitioner::attach_weights(), libMesh::LinearPartitioner::partition_range(), libMesh::MappedSubdomainPartitioner::partition_range(), libMesh::CentroidPartitioner::partition_range(), and libMesh::Partitioner::single_partition().

161 {
162  LOG_SCOPE("single_partition_range()", "Partitioner");
163 
164  for ( ; it != end; ++it)
165  {
166  Elem * elem = *it;
167  elem->processor_id() = 0;
168 
169  // Assign all this element's nodes to processor 0 as well.
170  for (unsigned int n=0; n<elem->n_nodes(); ++n)
171  elem->node_ptr(n)->processor_id() = 0;
172  }
173 }

Member Data Documentation

ErrorVector* libMesh::Partitioner::_weights
protectedinherited

The weights that might be used for partitioning.

Definition at line 213 of file partitioner.h.

Referenced by attach_weights().

const dof_id_type libMesh::Partitioner::communication_blocksize = 1000000
staticprotectedinherited

The blocksize to use when doing blocked parallel communication. This limits the maximum vector size which can be used in a single communication step.

Definition at line 208 of file partitioner.h.

Referenced by libMesh::Partitioner::set_parent_processor_ids().


The documentation for this class was generated from the following files: