libMesh::MetisPartitioner Class Reference

Partitioner which interfaces with the METIS library. More...

#include <metis_partitioner.h>

Inheritance diagram for libMesh::MetisPartitioner:

Public Member Functions

 MetisPartitioner ()
 
virtual std::unique_ptr< Partitioner > clone () const libmesh_override
 
virtual void attach_weights (ErrorVector *weights) libmesh_override
 
virtual void partition_range (MeshBase &mesh, MeshBase::element_iterator it, MeshBase::element_iterator end, const unsigned int n) libmesh_override
 
virtual void partition (MeshBase &mesh, const unsigned int n)
 
virtual void partition (MeshBase &mesh)
 
void repartition (MeshBase &mesh, const unsigned int n)
 
void repartition (MeshBase &mesh)
 

Static Public Member Functions

static void partition_unpartitioned_elements (MeshBase &mesh)
 
static void partition_unpartitioned_elements (MeshBase &mesh, const unsigned int n)
 
static void set_parent_processor_ids (MeshBase &mesh)
 
static void set_node_processor_ids (MeshBase &mesh)
 

Protected Member Functions

virtual void _do_partition (MeshBase &mesh, const unsigned int n) libmesh_override
 
void single_partition (MeshBase &mesh)
 
void single_partition_range (MeshBase::element_iterator it, MeshBase::element_iterator end)
 
virtual void _do_repartition (MeshBase &mesh, const unsigned int n)
 

Protected Attributes

ErrorVector * _weights
 

Static Protected Attributes

static const dof_id_type communication_blocksize = 1000000
 

Detailed Description

Partitioner which interfaces with the METIS library.

The MetisPartitioner uses the Metis graph partitioner to partition the elements.

Author
Benjamin S. Kirk
Date
2003

Definition at line 38 of file metis_partitioner.h.

Constructor & Destructor Documentation

libMesh::MetisPartitioner::MetisPartitioner ( )
inline

Constructor.

Definition at line 45 of file metis_partitioner.h.

45 {}

Member Function Documentation

void libMesh::MetisPartitioner::_do_partition ( MeshBase &  mesh,
const unsigned int  n 
)
protected virtual

Partition the MeshBase into n subdomains.

Implements libMesh::Partitioner.

Definition at line 513 of file metis_partitioner.C.

References libMesh::MeshBase::active_elements_begin(), and libMesh::MeshBase::active_elements_end().

Referenced by attach_weights().

515 {
516  this->partition_range(mesh,
517  mesh.active_elements_begin(),
518  mesh.active_elements_end(),
519  n_pieces);
520 }
virtual void libMesh::Partitioner::_do_repartition ( MeshBase &  mesh,
const unsigned int  n 
)
inline protected virtual inherited

This is the actual re-partitioning method which can be overridden in derived classes.

Note
The default behavior is to simply call the partition function.

Reimplemented in libMesh::ParmetisPartitioner.

Definition at line 204 of file partitioner.h.

References libMesh::Partitioner::_do_partition().

Referenced by libMesh::Partitioner::repartition().

205  { this->_do_partition (mesh, n); }
virtual void libMesh::MetisPartitioner::attach_weights ( ErrorVector *  weights )
inline virtual

Attach weights that can be used for partitioning. This ErrorVector should be exactly the same on every processor and should have mesh->max_elem_id() entries.

Reimplemented from libMesh::Partitioner.

Definition at line 55 of file metis_partitioner.h.

References libMesh::Partitioner::_weights.

55 { _weights = weights; }
virtual std::unique_ptr<Partitioner> libMesh::MetisPartitioner::clone ( ) const
inline virtual
Returns
A copy of this partitioner wrapped in a smart pointer.

Implements libMesh::Partitioner.

Definition at line 50 of file metis_partitioner.h.

51  {
52  return libmesh_make_unique<MetisPartitioner>();
53  }
void libMesh::Partitioner::partition ( MeshBase &  mesh,
const unsigned int  n 
)
virtual inherited

Partitions the MeshBase into n parts by setting processor_id() on Nodes and Elems.

Note
If you are implementing a new type of Partitioner, you most likely do not want to override the partition() function; see instead the protected virtual _do_partition() method below. The partition() function is responsible for doing a lot of libmesh-internals-specific setup and finalization before and after the _do_partition() function is called. The only responsibility of the _do_partition() function, on the other hand, is to set the processor IDs of the elements according to a specific partitioning algorithm. See, e.g., MetisPartitioner for an example.

Definition at line 49 of file partitioner.C.

References libMesh::Partitioner::_do_partition(), libMesh::ParallelObject::comm(), libMesh::MeshTools::libmesh_assert_valid_remote_elems(), mesh, std::min(), libMesh::MeshBase::n_active_elem(), libMesh::Partitioner::partition_unpartitioned_elements(), libMesh::MeshBase::redistribute(), libMesh::MeshBase::set_n_partitions(), libMesh::Partitioner::set_node_processor_ids(), libMesh::Partitioner::set_parent_processor_ids(), libMesh::Partitioner::single_partition(), and libMesh::MeshBase::update_post_partitioning().

Referenced by libMesh::ParmetisPartitioner::_do_repartition(), libMesh::Partitioner::partition(), and libMesh::Partitioner::~Partitioner().

51 {
52  libmesh_parallel_only(mesh.comm());
53 
54  // BSK - temporary fix while redistribution is integrated 6/26/2008
55  // Uncomment this to not repartition in parallel
56  // if (!mesh.is_serial())
57  // return;
58 
59  // we cannot partition into more pieces than we have
60  // active elements!
61  const unsigned int n_parts =
62  static_cast<unsigned int>
63  (std::min(mesh.n_active_elem(), static_cast<dof_id_type>(n)));
64 
65  // Set the number of partitions in the mesh
66  mesh.set_n_partitions()=n_parts;
67 
68  if (n_parts == 1)
69  {
70  this->single_partition (mesh);
71  return;
72  }
73 
74  // First assign a temporary partitioning to any unpartitioned elements
75  Partitioner::partition_unpartitioned_elements (mesh, n_parts);
76 
77  // Call the partitioning function
78  this->_do_partition(mesh,n_parts);
79 
80  // Set the parent's processor ids
81  Partitioner::set_parent_processor_ids (mesh);
82 
83  // Redistribute elements if necessary, before setting node processor
84  // ids, to make sure those will be set consistently
85  mesh.redistribute();
86 
87 #ifdef DEBUG
88  MeshTools::libmesh_assert_valid_remote_elems (mesh);
89 
90  // Messed up elem processor_id()s can leave us without the child
91  // elements we need to restrict vectors on a distributed mesh
92  MeshTools::libmesh_assert_valid_procids<Elem>(mesh);
93 #endif
94 
95  // Set the node's processor ids
96  Partitioner::set_node_processor_ids (mesh);
97 
98 #ifdef DEBUG
99  MeshTools::libmesh_assert_valid_procids<Elem>(mesh);
100 #endif
101 
102  // Give derived Mesh classes a chance to update any cached data to
103  // reflect the new partitioning
104  mesh.update_post_partitioning();
105 }
void libMesh::Partitioner::partition ( MeshBase &  mesh )
virtual inherited

Partitions the MeshBase into mesh.n_processors() by setting processor_id() on Nodes and Elems.

Note
If you are implementing a new type of Partitioner, you most likely do not want to override the partition() function; see instead the protected virtual _do_partition() method below. The partition() function is responsible for doing a lot of libmesh-internals-specific setup and finalization before and after the _do_partition() function is called. The only responsibility of the _do_partition() function, on the other hand, is to set the processor IDs of the elements according to a specific partitioning algorithm. See, e.g., MetisPartitioner for an example.

Definition at line 42 of file partitioner.C.

References libMesh::ParallelObject::n_processors(), and libMesh::Partitioner::partition().

43 {
44  this->partition(mesh,mesh.n_processors());
45 }
void libMesh::MetisPartitioner::partition_range ( MeshBase &  mesh,
MeshBase::element_iterator  it,
MeshBase::element_iterator  end,
const unsigned int  n 
)
virtual

Called by the SubdomainPartitioner to partition elements in the range (it, end).

Reimplemented from libMesh::Partitioner.

Definition at line 56 of file metis_partitioner.C.

References libMesh::Elem::active(), libMesh::as_range(), libMesh::Parallel::Communicator::broadcast(), libMesh::ParallelObject::comm(), libMesh::vectormap< Key, Tp >::count(), libMesh::MeshTools::create_bounding_box(), libMesh::Elem::dim(), end, libMesh::err, libMesh::vectormap< Key, Tp >::find(), libMesh::MeshCommunication::find_global_indices(), libMesh::Elem::find_interior_neighbors(), libMesh::DofObject::id(), libMesh::vectormap< Key, Tp >::insert(), libMesh::Elem::interior_parent(), libMesh::MeshBase::is_serial(), libmesh_nullptr, std::max(), libMesh::Elem::n_nodes(), libMesh::Elem::neighbor_ptr(), libMesh::Elem::neighbor_ptr_range(), libMesh::METIS_CSR_Graph< IndexType >::offsets, libMesh::SFCPartitioner::partition_range(), libMesh::METIS_CSR_Graph< IndexType >::prep_n_nonzeros(), libMesh::METIS_CSR_Graph< IndexType >::prepare_for_use(), libMesh::ParallelObject::processor_id(), libMesh::DofObject::processor_id(), libMesh::MeshBase::query_elem_ptr(), and libMesh::METIS_CSR_Graph< IndexType >::vals.

Referenced by attach_weights().

60 {
61  libmesh_assert_greater (n_pieces, 0);
62 
63  // We don't yet support distributed meshes with this Partitioner
64  if (!mesh.is_serial())
65  libmesh_not_implemented();
66 
67  // Check for an easy return
68  if (n_pieces == 1)
69  {
70  this->single_partition_range (beg, end);
71  return;
72  }
73 
74  // What to do if the Metis library IS NOT present
75 #ifndef LIBMESH_HAVE_METIS
76 
77  libmesh_here();
78  libMesh::err << "ERROR: The library has been built without" << std::endl
79  << "Metis support. Using a space-filling curve" << std::endl
80  << "partitioner instead!" << std::endl;
81 
82  SFCPartitioner sfcp;
83  sfcp.partition_range (mesh, beg, end, n_pieces);
84 
85  // What to do if the Metis library IS present
86 #else
87 
88  LOG_SCOPE("partition_range()", "MetisPartitioner");
89 
90  const dof_id_type n_range_elem = std::distance(beg, end);
91 
92  // Metis will only consider the elements in the range.
93  // We need to map the range element ids into a
94  // contiguous range. Further, we want the unique range indexing to be
95  // independent of the element ordering, otherwise a circular dependency
96  // can result in which the partitioning depends on the ordering which
97  // depends on the partitioning...
98  vectormap<dof_id_type, dof_id_type> global_index_map;
99  global_index_map.reserve (n_range_elem);
100 
101  {
102  std::vector<dof_id_type> global_index;
103 
104  MeshCommunication().find_global_indices (mesh.comm(),
105  MeshTools::create_bounding_box (mesh),
106  beg, end, global_index);
107 
108  libmesh_assert_equal_to (global_index.size(), n_range_elem);
109 
110  MeshBase::element_iterator it = beg;
111  for (std::size_t cnt=0; it != end; ++it)
112  {
113  const Elem * elem = *it;
114 
115  global_index_map.insert (std::make_pair(elem->id(), global_index[cnt++]));
116  }
117  libmesh_assert_equal_to (global_index_map.size(), n_range_elem);
118  }
119 
120  // If we have boundary elements in this mesh, we want to account for
121  // the connectivity between them and interior elements. We can find
122  // interior elements from boundary elements, but we need to build up
123  // a lookup map to do the reverse.
124  typedef std::unordered_multimap<const Elem *, const Elem *> map_type;
125  map_type interior_to_boundary_map;
126 
127  {
128  MeshBase::element_iterator it = beg;
129  for (; it != end; ++it)
130  {
131  const Elem * elem = *it;
132 
133  // If we don't have an interior_parent then there's nothing
134  // to look us up.
135  if ((elem->dim() >= LIBMESH_DIM) ||
136  !elem->interior_parent())
137  continue;
138 
139  // get all relevant interior elements
140  std::set<const Elem *> neighbor_set;
141  elem->find_interior_neighbors(neighbor_set);
142 
143  std::set<const Elem *>::iterator n_it = neighbor_set.begin();
144  for (; n_it != neighbor_set.end(); ++n_it)
145  {
146  // FIXME - non-const versions of the std::set<const Elem
147  // *> returning methods would be nice
148  Elem * neighbor = const_cast<Elem *>(*n_it);
149 
150 #if defined(LIBMESH_HAVE_UNORDERED_MULTIMAP) || \
151  defined(LIBMESH_HAVE_TR1_UNORDERED_MULTIMAP) || \
152  defined(LIBMESH_HAVE_HASH_MULTIMAP) || \
153  defined(LIBMESH_HAVE_EXT_HASH_MULTIMAP)
154  interior_to_boundary_map.insert(std::make_pair(neighbor, elem));
155 #else
156  interior_to_boundary_map.insert(interior_to_boundary_map.begin(),
157  std::make_pair(neighbor, elem));
158 #endif
159  }
160  }
161  }
162 
163  // Data structure that Metis will fill up on processor 0 and broadcast.
164  std::vector<Metis::idx_t> part(n_range_elem);
165 
166  // Invoke METIS, but only on processor 0.
167  // Then broadcast the resulting decomposition
168  if (mesh.processor_id() == 0)
169  {
170  // Data structures and parameters needed only on processor 0 by Metis.
171  // std::vector<Metis::idx_t> options(5);
172  std::vector<Metis::idx_t> vwgt(n_range_elem);
173 
174  Metis::idx_t
175  n = static_cast<Metis::idx_t>(n_range_elem), // number of "nodes" (elements) in the graph
176  // wgtflag = 2, // weights on vertices only, none on edges
177  // numflag = 0, // C-style 0-based numbering
178  nparts = static_cast<Metis::idx_t>(n_pieces), // number of subdomains to create
179  edgecut = 0; // the numbers of edges cut by the resulting partition
180 
181  // Set the options
182  // options[0] = 0; // use default options
183 
184  // build the graph
185  METIS_CSR_Graph<Metis::idx_t> csr_graph;
186 
187  csr_graph.offsets.resize(n_range_elem + 1, 0);
188 
189  // Local scope for these
190  {
191  // build the graph in CSR format. Note that
192  // the edges in the graph will correspond to
193  // face neighbors
194 
195 #ifdef LIBMESH_ENABLE_AMR
196  std::vector<const Elem *> neighbors_offspring;
197 #endif
198 
199 #ifndef NDEBUG
200  std::size_t graph_size=0;
201 #endif
202 
203  // (1) first pass - get the row sizes for each element by counting the number
204  // of face neighbors. Also populate the vwght array if necessary
205  MeshBase::element_iterator it = beg;
206  for (; it != end; ++it)
207  {
208  const Elem * elem = *it;
209 
210  const dof_id_type elem_global_index =
211  global_index_map[elem->id()];
212 
213  libmesh_assert_less (elem_global_index, vwgt.size());
214 
215  // maybe there is a better weight?
216  // The weight is used to define what a balanced graph is
217  if (!_weights)
218  vwgt[elem_global_index] = elem->n_nodes();
219  else
220  vwgt[elem_global_index] = static_cast<Metis::idx_t>((*_weights)[elem->id()]);
221 
222  unsigned int num_neighbors = 0;
223 
224  // Loop over the element's neighbors. An element
225  // adjacency corresponds to a face neighbor
226  for (auto neighbor : elem->neighbor_ptr_range())
227  {
228  if (neighbor != libmesh_nullptr)
229  {
230  // If the neighbor is active, but is not in the
231  // range of elements being partitioned, treat it
232  // as a NULL neighbor.
233  if (neighbor->active() && !global_index_map.count(neighbor->id()))
234  continue;
235 
236  // If the neighbor is active treat it
237  // as a connection
238  if (neighbor->active())
239  num_neighbors++;
240 
241 #ifdef LIBMESH_ENABLE_AMR
242 
243  // Otherwise we need to find all of the
244  // neighbor's children that are connected to
245  // us and add them
246  else
247  {
248  // The side of the neighbor to which
249  // we are connected
250  const unsigned int ns =
251  neighbor->which_neighbor_am_i (elem);
252  libmesh_assert_less (ns, neighbor->n_neighbors());
253 
254  // Get all the active children (& grandchildren, etc...)
255  // of the neighbor.
256 
257  // FIXME - this is the wrong thing, since we
258  // should be getting the active family tree on
259  // our side only. But adding too many graph
260  // links may cause hanging nodes to tend to be
261  // on partition interiors, which would reduce
262  // communication overhead for constraint
263  // equations, so we'll leave it.
264  neighbor->active_family_tree (neighbors_offspring);
265 
266  // Get all the neighbor's children that
267  // live on that side and are thus connected
268  // to us
269  for (std::size_t nc=0; nc<neighbors_offspring.size(); nc++)
270  {
271  const Elem * child =
272  neighbors_offspring[nc];
273 
274  // Skip neighbor offspring which are not in the range of elements being partitioned.
275  if (!global_index_map.count(child->id()))
276  continue;
277 
278  // This does not assume a level-1 mesh.
279  // Note that since children have sides numbered
280  // coincident with the parent then this is a sufficient test.
281  if (child->neighbor_ptr(ns) == elem)
282  {
283  libmesh_assert (child->active());
284  num_neighbors++;
285  }
286  }
287  }
288 
289 #endif /* ifdef LIBMESH_ENABLE_AMR */
290 
291  }
292  }
293 
294  // Check for any interior neighbors
295  if ((elem->dim() < LIBMESH_DIM) && elem->interior_parent())
296  {
297  // get all relevant interior elements
298  std::set<const Elem *> neighbor_set;
299  elem->find_interior_neighbors(neighbor_set);
300 
301  num_neighbors += neighbor_set.size();
302  }
303 
304  // Check for any boundary neighbors
305  typedef map_type::iterator map_it_type;
306  std::pair<map_it_type, map_it_type>
307  bounds = interior_to_boundary_map.equal_range(elem);
308  num_neighbors += std::distance(bounds.first, bounds.second);
309 
310  csr_graph.prep_n_nonzeros(elem_global_index, num_neighbors);
311 #ifndef NDEBUG
312  graph_size += num_neighbors;
313 #endif
314  }
315 
316  csr_graph.prepare_for_use();
317 
318  // (2) second pass - fill the compressed adjacency array
319  it = beg;
320 
321  for (; it != end; ++it)
322  {
323  const Elem * elem = *it;
324 
325  const dof_id_type elem_global_index =
326  global_index_map[elem->id()];
327 
328  unsigned int connection=0;
329 
330  // Loop over the element's neighbors. An element
331  // adjacency corresponds to a face neighbor
332  for (auto neighbor : elem->neighbor_ptr_range())
333  {
334  if (neighbor != libmesh_nullptr)
335  {
336  // If the neighbor is active, but is not in the
337  // range of elements being partitioned, treat it
338  // as a NULL neighbor.
339  if (neighbor->active() && !global_index_map.count(neighbor->id()))
340  continue;
341 
342  // If the neighbor is active treat it
343  // as a connection
344  if (neighbor->active())
345  csr_graph(elem_global_index, connection++) = global_index_map[neighbor->id()];
346 
347 #ifdef LIBMESH_ENABLE_AMR
348 
349  // Otherwise we need to find all of the
350  // neighbor's children that are connected to
351  // us and add them
352  else
353  {
354  // The side of the neighbor to which
355  // we are connected
356  const unsigned int ns =
357  neighbor->which_neighbor_am_i (elem);
358  libmesh_assert_less (ns, neighbor->n_neighbors());
359 
360  // Get all the active children (& grandchildren, etc...)
361  // of the neighbor.
362  neighbor->active_family_tree (neighbors_offspring);
363 
364  // Get all the neighbor's children that
365  // live on that side and are thus connected
366  // to us
367  for (std::size_t nc=0; nc<neighbors_offspring.size(); nc++)
368  {
369  const Elem * child =
370  neighbors_offspring[nc];
371 
372  // Skip neighbor offspring which are not in the range of elements being partitioned.
373  if (!global_index_map.count(child->id()))
374  continue;
375 
376  // This does not assume a level-1 mesh.
377  // Note that since children have sides numbered
378  // coincident with the parent then this is a sufficient test.
379  if (child->neighbor_ptr(ns) == elem)
380  {
381  libmesh_assert (child->active());
382 
383  csr_graph(elem_global_index, connection++) = global_index_map[child->id()];
384  }
385  }
386  }
387 
388 #endif /* ifdef LIBMESH_ENABLE_AMR */
389 
390  }
391  }
392 
393  if ((elem->dim() < LIBMESH_DIM) &&
394  elem->interior_parent())
395  {
396  // get all relevant interior elements
397  std::set<const Elem *> neighbor_set;
398  elem->find_interior_neighbors(neighbor_set);
399 
400  std::set<const Elem *>::iterator n_it = neighbor_set.begin();
401  for (; n_it != neighbor_set.end(); ++n_it)
402  {
403  const Elem * neighbor = *n_it;
404 
405  // Not all interior neighbors are necessarily in
406  // the same Mesh (hence not in the global_index_map).
407  // This will be the case when partitioning a
408  // BoundaryMesh, whose elements all have
409  // interior_parents() that belong to some other
410  // Mesh.
411  const Elem * queried_elem = mesh.query_elem_ptr(neighbor->id());
412 
413  // Compare the neighbor and the queried_elem
414  // pointers, make sure they are the same.
415  if (queried_elem && queried_elem == neighbor)
416  {
417  vectormap<dof_id_type, dof_id_type>::iterator global_index_map_it =
418  global_index_map.find(neighbor->id());
419 
420  // If the interior_neighbor is in the Mesh but
421  // not in the global_index_map, we have other issues.
422  if (global_index_map_it == global_index_map.end())
423  libmesh_error_msg("Interior neighbor with id " << neighbor->id() << " not found in global_index_map.");
424 
425  else
426  csr_graph(elem_global_index, connection++) = global_index_map_it->second;
427  }
428  }
429  }
430 
431  // Check for any boundary neighbors
432  for (const auto & pr : as_range(interior_to_boundary_map.equal_range(elem)))
433  {
434  const Elem * neighbor = pr.second;
435  csr_graph(elem_global_index, connection++) =
436  global_index_map[neighbor->id()];
437  }
438  }
439 
440  // We create a non-empty vals for a disconnected graph, to
441  // work around a segfault from METIS.
442  libmesh_assert_equal_to (csr_graph.vals.size(),
443  std::max(graph_size, std::size_t(1)));
444  } // done building the graph
445 
446  Metis::idx_t ncon = 1;
447 
448  // Select which type of partitioning to create
449 
450  // Use recursive if the number of partitions is less than or equal to 8
451  if (n_pieces <= 8)
452  Metis::METIS_PartGraphRecursive(&n,
453  &ncon,
454  &csr_graph.offsets[0],
455  &csr_graph.vals[0],
456  &vwgt[0],
457  libmesh_nullptr,
458  libmesh_nullptr,
459  &nparts,
460  libmesh_nullptr,
461  libmesh_nullptr,
462  libmesh_nullptr,
463  &edgecut,
464  &part[0]);
465 
466  // Otherwise use kway
467  else
468  Metis::METIS_PartGraphKway(&n,
469  &ncon,
470  &csr_graph.offsets[0],
471  &csr_graph.vals[0],
472  &vwgt[0],
473  libmesh_nullptr,
474  libmesh_nullptr,
475  &nparts,
476  libmesh_nullptr,
477  libmesh_nullptr,
478  libmesh_nullptr,
479  &edgecut,
480  &part[0]);
481 
482  } // end processor 0 part
483 
484  // Broadcast the resulting partition
485  mesh.comm().broadcast(part);
486 
487  // Assign the returned processor ids. The part array contains
488  // the processor id for each active element, but in terms of
489  // the contiguous indexing we defined above
490  {
491  MeshBase::element_iterator it = beg;
492  for (; it!=end; ++it)
493  {
494  Elem * elem = *it;
495 
496  libmesh_assert (global_index_map.count(elem->id()));
497 
498  const dof_id_type elem_global_index =
499  global_index_map[elem->id()];
500 
501  libmesh_assert_less (elem_global_index, part.size());
502  const processor_id_type elem_procid =
503  static_cast<processor_id_type>(part[elem_global_index]);
504 
505  elem->processor_id() = elem_procid;
506  }
507  }
508 #endif
509 }
void libMesh::Partitioner::partition_unpartitioned_elements ( MeshBase &  mesh )
static inherited

These functions assign processor IDs to newly-created elements (in parallel) which are currently assigned to processor 0.

Definition at line 175 of file partitioner.C.

References libMesh::ParallelObject::n_processors().

Referenced by libMesh::Partitioner::partition(), libMesh::Partitioner::partition_range(), and libMesh::Partitioner::repartition().

176 {
178 }
void libMesh::Partitioner::partition_unpartitioned_elements ( MeshBase &  mesh,
const unsigned int  n 
)
static inherited

Definition at line 182 of file partitioner.C.

References libMesh::ParallelObject::comm(), libMesh::MeshTools::create_bounding_box(), end, libMesh::MeshCommunication::find_global_indices(), libMesh::MeshTools::n_elem(), libMesh::ParallelObject::n_processors(), libMesh::DofObject::processor_id(), libMesh::MeshBase::unpartitioned_elements_begin(), and libMesh::MeshBase::unpartitioned_elements_end().

184 {
185  MeshBase::element_iterator it = mesh.unpartitioned_elements_begin();
186  const MeshBase::element_iterator end = mesh.unpartitioned_elements_end();
187 
188  const dof_id_type n_unpartitioned_elements = MeshTools::n_elem (it, end);
189 
190  // the unpartitioned elements must exist on all processors. If the range is empty on one
191  // it is empty on all, and we can quit right here.
192  if (!n_unpartitioned_elements) return;
193 
194  // find the target subdomain sizes
195  std::vector<dof_id_type> subdomain_bounds(mesh.n_processors());
196 
197  for (processor_id_type pid=0; pid<mesh.n_processors(); pid++)
198  {
199  dof_id_type tgt_subdomain_size = 0;
200 
201  // watch out for the case that n_subdomains < n_processors
202  if (pid < n_subdomains)
203  {
204  tgt_subdomain_size = n_unpartitioned_elements/n_subdomains;
205 
206  if (pid < n_unpartitioned_elements%n_subdomains)
207  tgt_subdomain_size++;
208 
209  }
210 
211  //libMesh::out << "pid, #= " << pid << ", " << tgt_subdomain_size << std::endl;
212  if (pid == 0)
213  subdomain_bounds[0] = tgt_subdomain_size;
214  else
215  subdomain_bounds[pid] = subdomain_bounds[pid-1] + tgt_subdomain_size;
216  }
217 
218  libmesh_assert_equal_to (subdomain_bounds.back(), n_unpartitioned_elements);
219 
220  // create the unique mapping for all unpartitioned elements independent of partitioning
221  // determine the global indexing for all the unpartitioned elements
222  std::vector<dof_id_type> global_indices;
223 
224  // Calling this on all processors a unique range in [0,n_unpartitioned_elements) is constructed.
225  // Only the indices for the elements we pass in are returned in the array.
226  MeshCommunication().find_global_indices (mesh.comm(),
227  MeshTools::create_bounding_box (mesh), it, end,
228  global_indices);
229 
230  for (dof_id_type cnt=0; it != end; ++it)
231  {
232  Elem * elem = *it;
233 
234  libmesh_assert_less (cnt, global_indices.size());
235  const dof_id_type global_index =
236  global_indices[cnt++];
237 
238  libmesh_assert_less (global_index, subdomain_bounds.back());
239  libmesh_assert_less (global_index, n_unpartitioned_elements);
240 
241  const processor_id_type subdomain_id =
242  cast_int<processor_id_type>
243  (std::distance(subdomain_bounds.begin(),
244  std::upper_bound(subdomain_bounds.begin(),
245  subdomain_bounds.end(),
246  global_index)));
247  libmesh_assert_less (subdomain_id, n_subdomains);
248 
249  elem->processor_id() = subdomain_id;
250  //libMesh::out << "assigning " << global_index << " to " << subdomain_id << std::endl;
251  }
252 }
void libMesh::Partitioner::repartition ( MeshBase &  mesh,
const unsigned int  n 
)
inherited

Repartitions the MeshBase into n parts. (Some partitioning algorithms can repartition more efficiently than computing a new partitioning from scratch.) The default behavior is to simply call this->partition(mesh,n).

Definition at line 116 of file partitioner.C.

References libMesh::Partitioner::_do_repartition(), std::min(), libMesh::MeshBase::n_active_elem(), libMesh::Partitioner::partition_unpartitioned_elements(), libMesh::MeshBase::set_n_partitions(), libMesh::Partitioner::set_node_processor_ids(), libMesh::Partitioner::set_parent_processor_ids(), and libMesh::Partitioner::single_partition().

Referenced by libMesh::Partitioner::partition_range(), and libMesh::Partitioner::repartition().

118 {
119  // we cannot partition into more pieces than we have
120  // active elements!
121  const unsigned int n_parts =
122  static_cast<unsigned int>
123  (std::min(mesh.n_active_elem(), static_cast<dof_id_type>(n)));
124 
125  // Set the number of partitions in the mesh
126  mesh.set_n_partitions()=n_parts;
127 
128  if (n_parts == 1)
129  {
130  this->single_partition (mesh);
131  return;
132  }
133 
134  // First assign a temporary partitioning to any unpartitioned elements
135  Partitioner::partition_unpartitioned_elements (mesh, n_parts);
136 
137  // Call the partitioning function
138  this->_do_repartition(mesh,n_parts);
139 
140  // Set the parent's processor ids
141  Partitioner::set_parent_processor_ids (mesh);
142 
143  // Set the node's processor ids
144  Partitioner::set_node_processor_ids (mesh);
145 }
void libMesh::Partitioner::repartition ( MeshBase &  mesh )
inherited

Repartitions the MeshBase into mesh.n_processors() parts. This exists as a separate entry point because some partitioning algorithms can repartition more efficiently than computing a new partitioning from scratch.

Definition at line 109 of file partitioner.C.

References libMesh::ParallelObject::n_processors(), and libMesh::Partitioner::repartition().

110 {
111  this->repartition(mesh,mesh.n_processors());
112 }
void libMesh::Partitioner::set_node_processor_ids ( MeshBase &  mesh )
static inherited

This function is called after partitioning to set the processor IDs for the nodes. By definition, a Node's processor ID is the minimum processor ID for all of the elements which share the node.

Definition at line 416 of file partitioner.C.

References libMesh::MeshBase::active_element_ptr_range(), libMesh::ParallelObject::comm(), libMesh::DofObject::invalid_processor_id, mesh, std::min(), libMesh::MeshTools::n_elem(), libMesh::Elem::n_nodes(), libMesh::ParallelObject::n_processors(), libMesh::Elem::node_ptr(), libMesh::MeshBase::node_ptr_range(), libMesh::MeshBase::node_ref(), libMesh::MeshBase::not_active_elements_begin(), libMesh::MeshBase::not_active_elements_end(), libMesh::ParallelObject::processor_id(), libMesh::DofObject::processor_id(), libMesh::Parallel::Communicator::send_receive(), libMesh::MeshBase::subactive_elements_begin(), libMesh::MeshBase::subactive_elements_end(), libMesh::MeshBase::unpartitioned_elements_begin(), and libMesh::MeshBase::unpartitioned_elements_end().

Referenced by libMesh::UnstructuredMesh::all_first_order(), libMesh::Partitioner::partition(), libMesh::MeshBase::partition(), libMesh::Partitioner::partition_range(), libMesh::XdrIO::read(), libMesh::Partitioner::repartition(), and libMesh::BoundaryInfo::sync().

417 {
418  LOG_SCOPE("set_node_processor_ids()","Partitioner");
419 
420  // This function must be run on all processors at once
421  libmesh_parallel_only(mesh.comm());
422 
423  // If we have any unpartitioned elements at this
424  // stage there is a problem
425  libmesh_assert (MeshTools::n_elem(mesh.unpartitioned_elements_begin(),
426  mesh.unpartitioned_elements_end()) == 0);
427 
428 
429  // const dof_id_type orig_n_local_nodes = mesh.n_local_nodes();
430 
431  // libMesh::err << "[" << mesh.processor_id() << "]: orig_n_local_nodes="
432  // << orig_n_local_nodes << std::endl;
433 
434  // Build up request sets. Each node is currently owned by a processor because
435  // it is connected to an element owned by that processor. However, during the
436  // repartitioning phase that element may have been assigned a new processor id, but
437  // it is still resident on the original processor. We need to know where to look
438  // for new ids before assigning new ids, otherwise we may be asking the wrong processors
439  // for the wrong information.
440  //
441  // The only remaining issue is what to do with unpartitioned nodes. Since they are required
442  // to live on all processors we can simply rely on ourselves to number them properly.
443  std::vector<std::vector<dof_id_type>>
444  requested_node_ids(mesh.n_processors());
445 
446  // Loop over all the nodes, count the ones on each processor. We can skip ourself
447  std::vector<dof_id_type> ghost_nodes_from_proc(mesh.n_processors(), 0);
448 
449  for (auto & node : mesh.node_ptr_range())
450  {
451  libmesh_assert(node);
452  const processor_id_type current_pid = node->processor_id();
453  if (current_pid != mesh.processor_id() &&
454  current_pid != DofObject::invalid_processor_id)
455  {
456  libmesh_assert_less (current_pid, ghost_nodes_from_proc.size());
457  ghost_nodes_from_proc[current_pid]++;
458  }
459  }
460 
461  // We know how many objects live on each processor, so reserve()
462  // space for each.
463  for (processor_id_type pid=0; pid != mesh.n_processors(); ++pid)
464  requested_node_ids[pid].reserve(ghost_nodes_from_proc[pid]);
465 
466  // We need to get the new pid for each node from the processor
467  // which *currently* owns the node. We can safely skip ourself
468  for (auto & node : mesh.node_ptr_range())
469  {
470  libmesh_assert(node);
471  const processor_id_type current_pid = node->processor_id();
472  if (current_pid != mesh.processor_id() &&
473  current_pid != DofObject::invalid_processor_id)
474  {
475  libmesh_assert_less (current_pid, requested_node_ids.size());
476  libmesh_assert_less (requested_node_ids[current_pid].size(),
477  ghost_nodes_from_proc[current_pid]);
478  requested_node_ids[current_pid].push_back(node->id());
479  }
480 
481  // Unset any previously-set node processor ids
482  node->invalidate_processor_id();
483  }
484 
485  // Loop over all the active elements
486  for (auto & elem : mesh.active_element_ptr_range())
487  {
488  libmesh_assert(elem);
489 
490  libmesh_assert_not_equal_to (elem->processor_id(), DofObject::invalid_processor_id);
491 
492  // For each node, set the processor ID to the min of
493  // its current value and this Element's processor id.
494  //
495  // TODO: we would probably get better parallel partitioning if
496  // we did something like "min for even numbered nodes, max for
497  // odd numbered". We'd need to be careful about how that would
498  // affect solution ordering for I/O, though.
499  for (unsigned int n=0; n<elem->n_nodes(); ++n)
500  elem->node_ptr(n)->processor_id() = std::min(elem->node_ptr(n)->processor_id(),
501  elem->processor_id());
502  }
503 
504  // And loop over the subactive elements, but don't reassign
505  // nodes that are already active on another processor.
506  MeshBase::element_iterator sub_it = mesh.subactive_elements_begin();
507  const MeshBase::element_iterator sub_end = mesh.subactive_elements_end();
508 
509  for ( ; sub_it != sub_end; ++sub_it)
510  {
511  Elem * elem = *sub_it;
512  libmesh_assert(elem);
513 
514  libmesh_assert_not_equal_to (elem->processor_id(), DofObject::invalid_processor_id);
515 
516  for (unsigned int n=0; n<elem->n_nodes(); ++n)
517  if (elem->node_ptr(n)->processor_id() == DofObject::invalid_processor_id)
518  elem->node_ptr(n)->processor_id() = elem->processor_id();
519  }
520 
521  // Same for the inactive elements -- we will have already gotten most of these
522  // nodes, *except* for the case of a parent with a subset of children which are
523  // ghost elements. In that case some of the parent nodes will not have been
524  // properly handled yet
525  MeshBase::element_iterator not_it = mesh.not_active_elements_begin();
526  const MeshBase::element_iterator not_end = mesh.not_active_elements_end();
527 
528  for ( ; not_it != not_end; ++not_it)
529  {
530  Elem * elem = *not_it;
531  libmesh_assert(elem);
532 
533  libmesh_assert_not_equal_to (elem->processor_id(), DofObject::invalid_processor_id);
534 
535  for (unsigned int n=0; n<elem->n_nodes(); ++n)
536  if (elem->node_ptr(n)->processor_id() == DofObject::invalid_processor_id)
537  elem->node_ptr(n)->processor_id() = elem->processor_id();
538  }
539 
540  // We can't assert that all nodes are connected to elements, because
541  // a DistributedMesh with NodeConstraints might have pulled in some
542  // remote nodes solely for evaluating those constraints.
543  // MeshTools::libmesh_assert_connected_nodes(mesh);
544 
545  // For such nodes, we'll do a sanity check later when making sure
546  // that we successfully reset their processor ids to something
547  // valid.
548 
549  // Next set node ids from other processors, excluding self
550  for (processor_id_type p=1; p != mesh.n_processors(); ++p)
551  {
552  // Trade my requests with processor procup and procdown
553  processor_id_type procup = cast_int<processor_id_type>
554  ((mesh.processor_id() + p) % mesh.n_processors());
555  processor_id_type procdown = cast_int<processor_id_type>
556  ((mesh.n_processors() + mesh.processor_id() - p) %
557  mesh.n_processors());
558  std::vector<dof_id_type> request_to_fill;
559  mesh.comm().send_receive(procup, requested_node_ids[procup],
560  procdown, request_to_fill);
561 
562  // Fill those requests in-place
563  for (std::size_t i=0; i != request_to_fill.size(); ++i)
564  {
565  Node & node = mesh.node_ref(request_to_fill[i]);
566  const processor_id_type new_pid = node.processor_id();
567 
568  // We may have an invalid processor_id() on nodes that have been
569  // "detached" from coarsened-away elements but that have not yet
570  // themselves been removed.
571  // libmesh_assert_not_equal_to (new_pid, DofObject::invalid_processor_id);
572  // libmesh_assert_less (new_pid, mesh.n_partitions()); // this is the correct test --
573  request_to_fill[i] = new_pid; // the number of partitions may
574  } // not equal the number of processors
575 
576  // Trade back the results
577  std::vector<dof_id_type> filled_request;
578  mesh.comm().send_receive(procdown, request_to_fill,
579  procup, filled_request);
580  libmesh_assert_equal_to (filled_request.size(), requested_node_ids[procup].size());
581 
582  // And copy the id changes we've now been informed of
583  for (std::size_t i=0; i != filled_request.size(); ++i)
584  {
585  Node & node = mesh.node_ref(requested_node_ids[procup][i]);
586 
587  // this is the correct test -- the number of partitions may
588  // not equal the number of processors
589 
590  // But: we may have an invalid processor_id() on nodes that
591  // have been "detached" from coarsened-away elements but
592  // that have not yet themselves been removed.
593  // libmesh_assert_less (filled_request[i], mesh.n_partitions());
594 
595  node.processor_id(cast_int<processor_id_type>(filled_request[i]));
596  }
597  }
598 
599 #ifdef DEBUG
600  MeshTools::libmesh_assert_valid_procids<Node>(mesh);
601 #endif
602 }
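The send_receive loop above uses a ring schedule: at step p each rank requests from the rank p places "up" and answers the rank p places "down", so that after n_processors()-1 steps every ordered pair of ranks has been paired once. A serial sketch of just the index arithmetic (the `ring_schedule` helper is hypothetical; no MPI is involved):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// For rank my_pid among n_procs ranks, list the (procup, procdown)
// partners visited at steps p = 1 .. n_procs-1, using the same
// modular arithmetic as the loop above.
std::vector<std::pair<unsigned, unsigned>>
ring_schedule(unsigned my_pid, unsigned n_procs)
{
  std::vector<std::pair<unsigned, unsigned>> steps;
  for (unsigned p = 1; p != n_procs; ++p)
    steps.push_back({(my_pid + p) % n_procs,
                     (n_procs + my_pid - p) % n_procs});
  return steps;
}
```

Note the `n_procs +` term before the subtraction: processor ids are unsigned, so the sum keeps the intermediate value non-negative before the modulo.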
void libMesh::Partitioner::set_parent_processor_ids ( MeshBase & mesh)
staticinherited

This function is called after partitioning to set the processor IDs for the inactive parent elements. A parent's processor ID is the same as its first child's.
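On a serial mesh this reduces to a bottom-up minimum over the refinement tree, as the implementation below shows. A minimal standalone sketch of that rule (`TreeElem` and `assign_parent_pids` are hypothetical stand-ins, not libMesh API):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

struct TreeElem {
  unsigned pid;               // processor id
  std::vector<int> children;  // indices into the element array; empty => active
};

// Recursively assign each ancestor the minimum pid of its active
// descendants, mirroring the serial-mesh branch of the code below.
unsigned assign_parent_pids(std::vector<TreeElem> & elems, int root)
{
  TreeElem & e = elems[root];
  if (e.children.empty())
    return e.pid;                          // active element: pid already set
  unsigned pid = assign_parent_pids(elems, e.children[0]);
  for (std::size_t c = 1; c < e.children.size(); ++c)
    pid = std::min(pid, assign_parent_pids(elems, e.children[c]));
  return e.pid = pid;
}
```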

Definition at line 256 of file partitioner.C.

References libMesh::MeshBase::active_element_ptr_range(), libMesh::Elem::active_family_tree(), libMesh::MeshBase::ancestor_elements_begin(), libMesh::MeshBase::ancestor_elements_end(), libMesh::Elem::child_ref_range(), libMesh::ParallelObject::comm(), libMesh::Partitioner::communication_blocksize, libMesh::DofObject::id(), libMesh::DofObject::invalid_processor_id, libMesh::DofObject::invalidate_processor_id(), libMesh::MeshBase::is_serial(), libMesh::libmesh_ignore(), libMesh::MeshBase::max_elem_id(), std::min(), libMesh::Parallel::Communicator::min(), libMesh::MeshTools::n_elem(), libMesh::Elem::parent(), libMesh::processor_id(), libMesh::DofObject::processor_id(), libMesh::Elem::total_family_tree(), libMesh::MeshBase::unpartitioned_elements_begin(), and libMesh::MeshBase::unpartitioned_elements_end().

Referenced by libMesh::Partitioner::partition(), libMesh::Partitioner::partition_range(), and libMesh::Partitioner::repartition().

257 {
258  // Ignore the parameter when !LIBMESH_ENABLE_AMR
259  libmesh_ignore(mesh);
260 
261  LOG_SCOPE("set_parent_processor_ids()", "Partitioner");
262 
263 #ifdef LIBMESH_ENABLE_AMR
264 
265  // If the mesh is serial we have access to all the elements,
266  // in particular all the active ones. We can therefore set
267  // the parent processor ids indirectly through their children, and
268  // set the subactive processor ids while examining their active
269  // ancestors.
270  // By convention a parent is assigned to the minimum processor
271  // of all its children, and a subactive is assigned to the processor
272  // of its active ancestor.
273  if (mesh.is_serial())
274  {
275  for (auto & child : mesh.active_element_ptr_range())
276  {
277  // First set descendents
278  std::vector<const Elem *> subactive_family;
279  child->total_family_tree(subactive_family);
280  for (std::size_t i = 0; i != subactive_family.size(); ++i)
281  const_cast<Elem *>(subactive_family[i])->processor_id() = child->processor_id();
282 
283  // Then set ancestors
284  Elem * parent = child->parent();
285 
286  while (parent)
287  {
288  // invalidate the parent id, otherwise the min below
289  // will not work if the current parent id is less
290  // than all the children!
291  parent->invalidate_processor_id();
292 
293  for (auto & child : parent->child_ref_range())
294  {
295  libmesh_assert(!child.is_remote());
296  libmesh_assert_not_equal_to (child.processor_id(), DofObject::invalid_processor_id);
297  parent->processor_id() = std::min(parent->processor_id(),
298  child.processor_id());
299  }
300  parent = parent->parent();
301  }
302  }
303  }
304 
305  // When the mesh is parallel we cannot guarantee that parents have access to
306  // all their children.
307  else
308  {
309  // Setting subactive processor ids is easy: we can guarantee
310  // that children have access to all their parents.
311 
312  // Loop over all the active elements in the mesh
313  for (auto & child : mesh.active_element_ptr_range())
314  {
315  std::vector<const Elem *> subactive_family;
316  child->total_family_tree(subactive_family);
317  for (std::size_t i = 0; i != subactive_family.size(); ++i)
318  const_cast<Elem *>(subactive_family[i])->processor_id() = child->processor_id();
319  }
320 
321  // When the mesh is parallel we cannot guarantee that parents have access to
322  // all their children.
323 
324  // We will use a brute-force approach here. Each processor finds its parent
325  // elements and sets the parent pid to the minimum of its
326  // semilocal descendants.
327  // A global reduction is then performed to make sure the true minimum is found.
328  // As noted, this is required because we cannot guarantee that a parent has
329  // access to all its children on any single processor.
330  libmesh_parallel_only(mesh.comm());
331  libmesh_assert(MeshTools::n_elem(mesh.unpartitioned_elements_begin(),
332  mesh.unpartitioned_elements_end()) == 0);
333 
334  const dof_id_type max_elem_id = mesh.max_elem_id();
335 
336  std::vector<processor_id_type>
337  parent_processor_ids (std::min(communication_blocksize,
338  max_elem_id));
339 
340  for (dof_id_type blk=0, last_elem_id=0; last_elem_id<max_elem_id; blk++)
341  {
342  last_elem_id =
343  std::min(static_cast<dof_id_type>((blk+1)*communication_blocksize),
344  max_elem_id);
345  const dof_id_type first_elem_id = blk*communication_blocksize;
346 
347  std::fill (parent_processor_ids.begin(),
348  parent_processor_ids.end(),
349  DofObject::invalid_processor_id);
350 
351  // first build up local contributions to parent_processor_ids
352  MeshBase::element_iterator not_it = mesh.ancestor_elements_begin();
353  const MeshBase::element_iterator not_end = mesh.ancestor_elements_end();
354 
355  bool have_parent_in_block = false;
356 
357  for ( ; not_it != not_end; ++not_it)
358  {
359  Elem * parent = *not_it;
360 
361  const dof_id_type parent_idx = parent->id();
362  libmesh_assert_less (parent_idx, max_elem_id);
363 
364  if ((parent_idx >= first_elem_id) &&
365  (parent_idx < last_elem_id))
366  {
367  have_parent_in_block = true;
368  processor_id_type parent_pid = DofObject::invalid_processor_id;
369 
370  std::vector<const Elem *> active_family;
371  parent->active_family_tree(active_family);
372  for (std::size_t i = 0; i != active_family.size(); ++i)
373  parent_pid = std::min (parent_pid, active_family[i]->processor_id());
374 
375  const dof_id_type packed_idx = parent_idx - first_elem_id;
376  libmesh_assert_less (packed_idx, parent_processor_ids.size());
377 
378  parent_processor_ids[packed_idx] = parent_pid;
379  }
380  }
381 
382  // then find the global minimum
383  mesh.comm().min (parent_processor_ids);
384 
385  // and assign the ids, if we have a parent in this block.
386  if (have_parent_in_block)
387  for (not_it = mesh.ancestor_elements_begin();
388  not_it != not_end; ++not_it)
389  {
390  Elem * parent = *not_it;
391 
392  const dof_id_type parent_idx = parent->id();
393 
394  if ((parent_idx >= first_elem_id) &&
395  (parent_idx < last_elem_id))
396  {
397  const dof_id_type packed_idx = parent_idx - first_elem_id;
398  libmesh_assert_less (packed_idx, parent_processor_ids.size());
399 
400  const processor_id_type parent_pid =
401  parent_processor_ids[packed_idx];
402 
403  libmesh_assert_not_equal_to (parent_pid, DofObject::invalid_processor_id);
404 
405  parent->processor_id() = parent_pid;
406  }
407  }
408  }
409  }
410 
411 #endif // LIBMESH_ENABLE_AMR
412 }
void libMesh::Partitioner::single_partition ( MeshBase & mesh)
protectedinherited

Trivially "partitions" the mesh for one processor. Simply loops through the elements and assigns all of them to processor 0. It is provided as a separate function so that derived classes may use it without reimplementing it.

Definition at line 151 of file partitioner.C.

References libMesh::MeshBase::elements_begin(), libMesh::MeshBase::elements_end(), and libMesh::Partitioner::single_partition_range().

Referenced by libMesh::SubdomainPartitioner::_do_partition(), libMesh::Partitioner::attach_weights(), libMesh::Partitioner::partition(), and libMesh::Partitioner::repartition().

152 {
153  this->single_partition_range(mesh.elements_begin(),
154  mesh.elements_end());
155 }
void libMesh::Partitioner::single_partition_range ( MeshBase::element_iterator  it,
MeshBase::element_iterator  end 
)
protectedinherited

Slightly generalized version of single_partition which acts on a range of elements defined by the pair of iterators (it, end).

Definition at line 159 of file partitioner.C.

References end, libMesh::Elem::n_nodes(), libMesh::Elem::node_ptr(), and libMesh::DofObject::processor_id().

Referenced by libMesh::Partitioner::attach_weights(), libMesh::LinearPartitioner::partition_range(), libMesh::MappedSubdomainPartitioner::partition_range(), libMesh::CentroidPartitioner::partition_range(), and libMesh::Partitioner::single_partition().

161 {
162  LOG_SCOPE("single_partition_range()", "Partitioner");
163 
164  for ( ; it != end; ++it)
165  {
166  Elem * elem = *it;
167  elem->processor_id() = 0;
168 
169  // Assign all this element's nodes to processor 0 as well.
170  for (unsigned int n=0; n<elem->n_nodes(); ++n)
171  elem->node_ptr(n)->processor_id() = 0;
172  }
173 }

Member Data Documentation

ErrorVector* libMesh::Partitioner::_weights
protectedinherited

The weights that might be used for partitioning.

Definition at line 216 of file partitioner.h.

Referenced by attach_weights().

const dof_id_type libMesh::Partitioner::communication_blocksize = 1000000
staticprotectedinherited

The blocksize to use when doing blocked parallel communication. This limits the maximum vector size which can be used in a single communication step.
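This mirrors the loop in set_parent_processor_ids(): a range of ids is walked in fixed-size chunks so that no single communication buffer exceeds communication_blocksize entries. A standalone sketch of the chunking arithmetic (the `block_ranges` helper is hypothetical):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

using dof_id = std::uint32_t;   // stand-in for dof_id_type

// Return the [first, last) id range covered by each block when
// walking [0, max_id) in chunks of at most blocksize ids.
std::vector<std::pair<dof_id, dof_id>>
block_ranges(dof_id max_id, dof_id blocksize)
{
  std::vector<std::pair<dof_id, dof_id>> blocks;
  for (dof_id blk = 0, last = 0; last < max_id; ++blk)
    {
      const dof_id first = blk * blocksize;
      last = std::min(static_cast<dof_id>((blk + 1) * blocksize), max_id);
      blocks.push_back({first, last});
    }
  return blocks;
}
```

The final block is allowed to be short, exactly as in the partitioner's loop, where last_elem_id is clamped to max_elem_id.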

Definition at line 211 of file partitioner.h.

Referenced by libMesh::Partitioner::set_parent_processor_ids().


The documentation for this class was generated from the following files: