libMesh::ParmetisPartitioner Class Reference

Partitioner which provides an interface to ParMETIS.

#include <parmetis_partitioner.h>

Inheritance diagram for libMesh::ParmetisPartitioner (inherits from libMesh::Partitioner):

Public Member Functions

 ParmetisPartitioner ()
 
 ParmetisPartitioner (const ParmetisPartitioner &other)
 
ParmetisPartitioner & operator= (const ParmetisPartitioner &)=delete
 
 ParmetisPartitioner (ParmetisPartitioner &&)=default
 
ParmetisPartitioner & operator= (ParmetisPartitioner &&)=default
 
virtual ~ParmetisPartitioner ()
 
virtual std::unique_ptr< Partitioner > clone () const override
 
virtual void partition (MeshBase &mesh, const unsigned int n)
 
virtual void partition (MeshBase &mesh)
 
virtual void partition_range (MeshBase &, MeshBase::element_iterator, MeshBase::element_iterator, const unsigned int)
 
void repartition (MeshBase &mesh, const unsigned int n)
 
void repartition (MeshBase &mesh)
 
virtual void attach_weights (ErrorVector *)
 

Static Public Member Functions

static void partition_unpartitioned_elements (MeshBase &mesh)
 
static void partition_unpartitioned_elements (MeshBase &mesh, const unsigned int n)
 
static void set_parent_processor_ids (MeshBase &mesh)
 
static void set_node_processor_ids (MeshBase &mesh)
 
static void processor_pairs_to_interface_nodes (MeshBase &mesh, std::map< std::pair< processor_id_type, processor_id_type >, std::set< dof_id_type >> &processor_pair_to_nodes)
 
static void set_interface_node_processor_ids_linear (MeshBase &mesh)
 
static void set_interface_node_processor_ids_BFS (MeshBase &mesh)
 
static void set_interface_node_processor_ids_petscpartitioner (MeshBase &mesh)
 

Protected Member Functions

virtual void _do_repartition (MeshBase &mesh, const unsigned int n) override
 
virtual void _do_partition (MeshBase &mesh, const unsigned int n) override
 
virtual void build_graph (const MeshBase &mesh) override
 
void single_partition (MeshBase &mesh)
 
void single_partition_range (MeshBase::element_iterator it, MeshBase::element_iterator end)
 
virtual void _find_global_index_by_pid_map (const MeshBase &mesh)
 
void assign_partitioning (const MeshBase &mesh, const std::vector< dof_id_type > &parts)
 

Protected Attributes

ErrorVector * _weights
 
std::unordered_map< dof_id_type, dof_id_type > _global_index_by_pid_map
 
std::vector< dof_id_type > _n_active_elem_on_proc
 
std::vector< std::vector< dof_id_type > > _dual_graph
 
std::vector< Elem * > _local_id_to_elem
 

Static Protected Attributes

static const dof_id_type communication_blocksize = 1000000
 

Private Member Functions

void initialize (const MeshBase &mesh, const unsigned int n_sbdmns)
 

Private Attributes

std::unique_ptr< ParmetisHelper > _pmetis
 

Detailed Description

Partitioner which provides an interface to ParMETIS.

The ParmetisPartitioner uses the Parmetis graph partitioner to partition the elements.

Author
Benjamin S. Kirk
Date
2003

Definition at line 47 of file parmetis_partitioner.h.
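A minimal usage sketch (illustrative, not from the library's documentation; assumes a build configured with ParMETIS — without it the class falls back to METIS, as shown in _do_repartition() below):

#include "libmesh/libmesh.h"
#include "libmesh/replicated_mesh.h"
#include "libmesh/mesh_generation.h"
#include "libmesh/parmetis_partitioner.h"

using namespace libMesh;

int main (int argc, char ** argv)
{
  LibMeshInit init (argc, argv);

  // Build a small example mesh to partition.
  ReplicatedMesh mesh (init.comm());
  MeshTools::Generation::build_square (mesh, 20, 20);

  // Partition the active elements into one part per processor.
  ParmetisPartitioner partitioner;
  partitioner.partition (mesh, mesh.n_processors());

  return 0;
}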

Constructor & Destructor Documentation

◆ ParmetisPartitioner() [1/3]

libMesh::ParmetisPartitioner::ParmetisPartitioner ( )

Default and copy ctors.

◆ ParmetisPartitioner() [2/3]

libMesh::ParmetisPartitioner::ParmetisPartitioner ( const ParmetisPartitioner &  other)

◆ ParmetisPartitioner() [3/3]

libMesh::ParmetisPartitioner::ParmetisPartitioner ( ParmetisPartitioner &&  )
default

Move ctor and move assignment operator are explicitly inline-defaulted for this class; the destructor is defined out-of-line (see below).

◆ ~ParmetisPartitioner()

virtual libMesh::ParmetisPartitioner::~ParmetisPartitioner ( )
virtual

The destructor is out-of-line-defaulted to play nice with forward declarations.

Member Function Documentation

◆ _do_partition()

virtual void libMesh::ParmetisPartitioner::_do_partition ( MeshBase &  mesh,
const unsigned int  n 
)
override protected virtual

Partition the MeshBase into n subdomains.

Implements libMesh::Partitioner.

◆ _do_repartition()

void libMesh::ParmetisPartitioner::_do_repartition ( MeshBase &  mesh,
const unsigned int  n 
)
override protected virtual

Parmetis can handle dynamically repartitioning a mesh such that the redistribution costs are minimized. This method takes a previously partitioned mesh (which may have then been adaptively refined) and repartitions it.

Reimplemented from libMesh::Partitioner.

Definition at line 93 of file parmetis_partitioner.C.

References mesh, libMesh::MIN_ELEM_PER_PROC, libMesh::out, libMesh::Partitioner::partition(), and libMesh::MetisPartitioner::partition_range().

95 {
96  // This function must be run on all processors at once
97  libmesh_parallel_only(mesh.comm());
98 
99  // Check for easy returns
100  if (!mesh.n_elem())
101  return;
102 
103  if (n_sbdmns == 1)
104  {
105  this->single_partition(mesh);
106  return;
107  }
108 
109  libmesh_assert_greater (n_sbdmns, 0);
110 
111  // What to do if the Parmetis library IS NOT present
112 #ifndef LIBMESH_HAVE_PARMETIS
113 
114  libmesh_do_once(
115  libMesh::out << "ERROR: The library has been built without" << std::endl
116  << "Parmetis support. Using a Metis" << std::endl
117  << "partitioner instead!" << std::endl;);
118 
119  MetisPartitioner mp;
120 
121  // Don't just call partition() here; that would end up calling
122  // post-element-partitioning work redundantly (and at the moment
123  // incorrectly)
124  mp.partition_range (mesh, mesh.active_elements_begin(),
125  mesh.active_elements_end(), n_sbdmns);
126 
127  // What to do if the Parmetis library IS present
128 #else
129 
130  // Revert to METIS on one processor.
131  if (mesh.n_processors() == 1)
132  {
133  // Make sure the mesh knows it's serial
134  mesh.allgather();
135 
136  MetisPartitioner mp;
137  // Don't just call partition() here; that would end up calling
138  // post-element-partitioning work redundantly (and at the moment
139  // incorrectly)
140  mp.partition_range (mesh, mesh.active_elements_begin(),
141  mesh.active_elements_end(), n_sbdmns);
142  return;
143  }
144 
145  LOG_SCOPE("repartition()", "ParmetisPartitioner");
146 
147  // Initialize the data structures required by ParMETIS
148  this->initialize (mesh, n_sbdmns);
149 
150  // Make sure all processors have enough active local elements.
151  // Parmetis tends to crash when it's given only a couple elements
152  // per partition.
153  {
154  bool all_have_enough_elements = true;
155  for (std::size_t pid=0; pid<_n_active_elem_on_proc.size(); pid++)
156  if (_n_active_elem_on_proc[pid] < MIN_ELEM_PER_PROC)
157  all_have_enough_elements = false;
158 
159  // Parmetis will not work unless each processor has some
160  // elements. Specifically, it will abort when passed a nullptr
161  // partition array on *any* of the processors.
162  if (!all_have_enough_elements)
163  {
164  // FIXME: revert to METIS, although this requires a serial mesh
165  MeshSerializer serialize(mesh);
166  MetisPartitioner mp;
167  mp.partition (mesh, n_sbdmns);
168  return;
169  }
170  }
171 
172  // build the graph corresponding to the mesh
173  this->build_graph (mesh);
174 
175 
176  // Partition the graph
177  std::vector<Parmetis::idx_t> vsize(_pmetis->vwgt.size(), 1);
178  Parmetis::real_t itr = 1000000.0;
179  MPI_Comm mpi_comm = mesh.comm().get();
180 
181  // Call the ParMETIS adaptive repartitioning method. This respects the
182  // original partitioning when computing the new partitioning so as to
183  // minimize the required data redistribution.
184  Parmetis::ParMETIS_V3_AdaptiveRepart(_pmetis->vtxdist.empty() ? nullptr : _pmetis->vtxdist.data(),
185  _pmetis->xadj.empty() ? nullptr : _pmetis->xadj.data(),
186  _pmetis->adjncy.empty() ? nullptr : _pmetis->adjncy.data(),
187  _pmetis->vwgt.empty() ? nullptr : _pmetis->vwgt.data(),
188  vsize.empty() ? nullptr : vsize.data(),
189  nullptr,
190  &_pmetis->wgtflag,
191  &_pmetis->numflag,
192  &_pmetis->ncon,
193  &_pmetis->nparts,
194  _pmetis->tpwgts.empty() ? nullptr : _pmetis->tpwgts.data(),
195  _pmetis->ubvec.empty() ? nullptr : _pmetis->ubvec.data(),
196  &itr,
197  _pmetis->options.data(),
198  &_pmetis->edgecut,
199  _pmetis->part.empty() ? nullptr : reinterpret_cast<Parmetis::idx_t *>(_pmetis->part.data()),
200  &mpi_comm);
201 
202  // Assign the returned processor ids
203  this->assign_partitioning (mesh, _pmetis->part);
204 
205 #endif // #ifndef LIBMESH_HAVE_PARMETIS ... else ...
206 
207 }
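For context, a hedged sketch of reaching this method through the public repartition() API after refinement (the MeshRefinement usage is illustrative):

#include "libmesh/mesh_refinement.h"
#include "libmesh/parmetis_partitioner.h"

using namespace libMesh;

void refine_and_rebalance (MeshBase & mesh)
{
  ParmetisPartitioner partitioner;

  // Initial partitioning across all processors.
  partitioner.partition (mesh);

  // Refine everywhere; this changes the per-processor element counts.
  MeshRefinement refinement (mesh);
  refinement.uniformly_refine (1);

  // Adaptive repartitioning: ParMETIS_V3_AdaptiveRepart respects the
  // existing partitioning so data redistribution stays small.
  partitioner.repartition (mesh);
}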

◆ _find_global_index_by_pid_map()

void libMesh::Partitioner::_find_global_index_by_pid_map ( const MeshBase &  mesh)
protected virtual inherited

Construct contiguous global indices for the current partitioning. The global indices are ordered part by part.

Definition at line 907 of file partitioner.C.

References libMesh::Partitioner::_global_index_by_pid_map, libMesh::Partitioner::_n_active_elem_on_proc, libMesh::as_range(), libMesh::MeshTools::create_bounding_box(), libMesh::MeshCommunication::find_local_indices(), mesh, and libMesh::Parallel::sync_dofobject_data_by_id().

Referenced by libMesh::Partitioner::build_graph().

908 {
909  const dof_id_type n_active_local_elem = mesh.n_active_local_elem();
910 
911  // Find the number of active elements on each processor. We cannot use
912  // mesh.n_active_elem_on_proc(pid) since that only returns the number of
913  // elements assigned to pid which are currently stored on the calling
914  // processor. This will not in general be correct for parallel meshes
915  // when (pid!=mesh.processor_id()).
916  _n_active_elem_on_proc.resize(mesh.n_processors());
917  mesh.comm().allgather(n_active_local_elem, _n_active_elem_on_proc);
918 
919  libMesh::BoundingBox bbox =
920  MeshTools::create_bounding_box (mesh);
921 
922  _global_index_by_pid_map.clear();
923 
924  // create the mapping which is contiguous by processor
925  MeshCommunication().find_local_indices (bbox,
926  mesh.active_local_elements_begin(),
927  mesh.active_local_elements_end(),
928  _global_index_by_pid_map);
929 
930  SyncLocalIDs sync(_global_index_by_pid_map);
931 
932  Parallel::sync_dofobject_data_by_id
933  (mesh.comm(), mesh.active_elements_begin(), mesh.active_elements_end(), sync);
934 
935  dof_id_type pid_offset=0;
936  for (processor_id_type pid=0; pid<mesh.n_processors(); pid++)
937  {
938  for (const auto & elem : as_range(mesh.active_pid_elements_begin(pid),
939  mesh.active_pid_elements_end(pid)))
940  {
941  libmesh_assert_less (_global_index_by_pid_map[elem->id()], _n_active_elem_on_proc[pid]);
942 
943  _global_index_by_pid_map[elem->id()] += pid_offset;
944  }
945 
946  pid_offset += _n_active_elem_on_proc[pid];
947  }
948 }

◆ assign_partitioning()

void libMesh::Partitioner::assign_partitioning ( const MeshBase &  mesh,
const std::vector< dof_id_type > &  parts 
)
protected inherited

Assign the computed partitioning to the mesh.

Definition at line 1113 of file partitioner.C.

References libMesh::Partitioner::_global_index_by_pid_map, libMesh::Partitioner::_n_active_elem_on_proc, data, mesh, and libMesh::Parallel::pull_parallel_vector_data().

1114 {
1115  LOG_SCOPE("assign_partitioning()", "ParmetisPartitioner");
1116 
1117  // This function must be run on all processors at once
1118  libmesh_parallel_only(mesh.comm());
1119 
1120  dof_id_type first_local_elem = 0;
1121  for (processor_id_type pid=0; pid < mesh.processor_id(); pid++)
1122  first_local_elem += _n_active_elem_on_proc[pid];
1123 
1124 #ifndef NDEBUG
1125  const dof_id_type n_active_local_elem = mesh.n_active_local_elem();
1126 #endif
1127 
1128  std::map<processor_id_type, std::vector<dof_id_type>>
1129  requested_ids;
1130 
1131  // Results to gather from each processor - kept in a map so we
1132  // do only one loop over elements after all receives are done.
1133  std::map<processor_id_type, std::vector<processor_id_type>>
1134  filled_request;
1135 
1136  for (auto & elem : mesh.active_element_ptr_range())
1137  {
1138  // we need to get the index from the owning processor
1139  // (note we cannot assign it now -- we are iterating
1140  // over elements again and this will be bad!)
1141  requested_ids[elem->processor_id()].push_back(elem->id());
1142  }
1143 
1144  auto gather_functor =
1145  [this,
1146  & parts,
1147 #ifndef NDEBUG
1148  & mesh,
1149  n_active_local_elem,
1150 #endif
1151  first_local_elem]
1152  (processor_id_type, const std::vector<dof_id_type> & ids,
1153  std::vector<processor_id_type> & data)
1154  {
1155  const std::size_t ids_size = ids.size();
1156  data.resize(ids.size());
1157 
1158  for (std::size_t i=0; i != ids_size; i++)
1159  {
1160  const dof_id_type requested_elem_index = ids[i];
1161 
1162  libmesh_assert(_global_index_by_pid_map.count(requested_elem_index));
1163 
1164  const dof_id_type global_index_by_pid =
1165  _global_index_by_pid_map[requested_elem_index];
1166 
1167  const dof_id_type local_index =
1168  global_index_by_pid - first_local_elem;
1169 
1170  libmesh_assert_less (local_index, parts.size());
1171  libmesh_assert_less (local_index, n_active_local_elem);
1172 
1173  const processor_id_type elem_procid =
1174  cast_int<processor_id_type>(parts[local_index]);
1175 
1176  libmesh_assert_less (elem_procid, mesh.n_partitions());
1177 
1178  data[i] = elem_procid;
1179  }
1180  };
1181 
1182  auto action_functor =
1183  [&filled_request]
1184  (processor_id_type pid,
1185  const std::vector<dof_id_type> &,
1186  const std::vector<processor_id_type> & new_procids)
1187  {
1188  filled_request[pid] = new_procids;
1189  };
1190 
1191  // Trade requests with other processors
1192  const processor_id_type * ex = nullptr;
1193  Parallel::pull_parallel_vector_data
1194  (mesh.comm(), requested_ids, gather_functor, action_functor, ex);
1195 
1196  // and finally assign the partitioning.
1197  // note we are iterating in exactly the same order
1198  // used to build up the request, so we can expect the
1199  // required entries to be in the proper sequence.
1200  std::vector<unsigned int> counters(mesh.n_processors(), 0);
1201  for (auto & elem : mesh.active_element_ptr_range())
1202  {
1203  const processor_id_type current_pid = elem->processor_id();
1204 
1205  libmesh_assert_less (counters[current_pid], requested_ids[current_pid].size());
1206 
1207  const processor_id_type elem_procid =
1208  filled_request[current_pid][counters[current_pid]++];
1209 
1210  libmesh_assert_less (elem_procid, mesh.n_partitions());
1211  elem->processor_id() = elem_procid;
1212  }
1213 }

◆ attach_weights()

virtual void libMesh::Partitioner::attach_weights ( ErrorVector * )
inline virtual inherited

Attach weights that can be used for partitioning. This ErrorVector should be exactly the same on every processor and should have mesh->max_elem_id() entries.

Reimplemented in libMesh::MetisPartitioner.

Definition at line 203 of file partitioner.h.

203 { libmesh_not_implemented(); }
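A sketch of the weighted workflow using MetisPartitioner, which reimplements this method (the weight choice is arbitrary; a replicated mesh is assumed so the weights are identical on every processor):

#include "libmesh/error_vector.h"
#include "libmesh/metis_partitioner.h"

using namespace libMesh;

void weighted_partition (MeshBase & mesh)
{
  // One weight slot per element id, identical on all processors.
  ErrorVector weights (mesh.max_elem_id());
  for (const auto & elem : mesh.active_element_ptr_range())
    weights[elem->id()] = static_cast<ErrorVectorReal>(elem->n_nodes());

  MetisPartitioner partitioner; // reimplements attach_weights()
  partitioner.attach_weights (&weights);
  partitioner.partition (mesh, mesh.n_processors());
}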

◆ build_graph()

void libMesh::ParmetisPartitioner::build_graph ( const MeshBase &  mesh)
override protected virtual

Build the graph.

Reimplemented from libMesh::Partitioner.

Definition at line 388 of file parmetis_partitioner.C.

References mesh.

389 {
390  LOG_SCOPE("build_graph()", "ParmetisPartitioner");
391 
392  // build the graph in distributed CSR format. Note that
393  // the edges in the graph will correspond to
394  // face neighbors
395  const dof_id_type n_active_local_elem = mesh.n_active_local_elem();
396 
397  Partitioner::build_graph (mesh);
398 
399  dof_id_type graph_size=0;
400 
401  for (auto & row: _dual_graph)
402  graph_size += cast_int<dof_id_type>(row.size());
403 
404  // Reserve space in the adjacency array
405  _pmetis->xadj.clear();
406  _pmetis->xadj.reserve (n_active_local_elem + 1);
407  _pmetis->adjncy.clear();
408  _pmetis->adjncy.reserve (graph_size);
409 
410  for (auto & graph_row : _dual_graph)
411  {
412  _pmetis->xadj.push_back(cast_int<int>(_pmetis->adjncy.size()));
413  _pmetis->adjncy.insert(_pmetis->adjncy.end(),
414  graph_row.begin(),
415  graph_row.end());
416  }
417 
418  // The end of the adjacency array for the last elem
419  _pmetis->xadj.push_back(cast_int<int>(_pmetis->adjncy.size()));
420 
421  libmesh_assert_equal_to (_pmetis->xadj.size(), n_active_local_elem+1);
422  libmesh_assert_equal_to (_pmetis->adjncy.size(), graph_size);
423 }
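To illustrate the CSR layout (example invented here, not taken from the library): a 1D mesh with three local elements e0-e1-e2, where edges are face-neighbor pairs, would store

// neighbors of e0 | e1 | e2
// adjncy = { 1,   0, 2,   1 }
// xadj   = { 0, 1, 3, 4 }
//
// Element i's face neighbors are adjncy[xadj[i]] .. adjncy[xadj[i+1]-1],
// so xadj has n_active_local_elem+1 entries, matching the asserts above.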

◆ clone()

virtual std::unique_ptr<Partitioner> libMesh::ParmetisPartitioner::clone ( ) const
inline override virtual
Returns
A copy of this partitioner wrapped in a smart pointer.

Implements libMesh::Partitioner.

Definition at line 79 of file parmetis_partitioner.h.

80  {
81  return libmesh_make_unique<ParmetisPartitioner>(*this);
82  }
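A short usage sketch (a MeshBase & mesh in scope is assumed, and installing the clone via MeshBase::partitioner() is one typical destination):

ParmetisPartitioner partitioner;
mesh.partitioner() = partitioner.clone(); // mesh takes ownership of the copy
mesh.prepare_for_use();                   // later preparation uses the clone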

◆ initialize()

void libMesh::ParmetisPartitioner::initialize ( const MeshBase &  mesh,
const unsigned int  n_sbdmns 
)
private

Initialize data structures.

Definition at line 214 of file parmetis_partitioner.C.

References libMesh::MeshTools::create_bounding_box(), end, libMesh::MeshCommunication::find_global_indices(), libMesh::DofObject::id(), mesh, and std::min().

216 {
217  LOG_SCOPE("initialize()", "ParmetisPartitioner");
218 
219  const dof_id_type n_active_local_elem = mesh.n_active_local_elem();
220  // Set parameters.
221  _pmetis->wgtflag = 2; // weights on vertices only
222  _pmetis->ncon = 1; // one weight per vertex
223  _pmetis->numflag = 0; // C-style 0-based numbering
224  _pmetis->nparts = static_cast<Parmetis::idx_t>(n_sbdmns); // number of subdomains to create
225  _pmetis->edgecut = 0; // the numbers of edges cut by the
226  // partition
227 
228  // Initialize data structures for ParMETIS
229  _pmetis->vtxdist.assign (mesh.n_processors()+1, 0);
230  _pmetis->tpwgts.assign (_pmetis->nparts, 1./_pmetis->nparts);
231  _pmetis->ubvec.assign (_pmetis->ncon, 1.05);
232  _pmetis->part.assign (n_active_local_elem, 0);
233  _pmetis->options.resize (5);
234  _pmetis->vwgt.resize (n_active_local_elem);
235 
236  // Set the options
237  _pmetis->options[0] = 1; // don't use default options
238  _pmetis->options[1] = 0; // default (level of timing)
239  _pmetis->options[2] = 15; // random seed (default)
240  _pmetis->options[3] = 2; // processor distribution and subdomain distribution are decoupled
241 
242  // ParMetis expects the elements to be numbered in contiguous blocks
243  // by processor, i.e. [0, ne0), [ne0, ne0+ne1), ...
244  // Since we only partition active elements we should have no expectation
245  // that we currently have such a distribution. So we need to create it.
246  // Also, at the same time we are going to map all the active elements into a globally
247  // unique range [0,n_active_elem) which is *independent* of the current partitioning.
248  // This can be fed to ParMetis as the initial partitioning of the subdomains (decoupled
249  // from the partitioning of the objects themselves). This allows us to get the same
250  // resultant partitioning independent of the input partitioning.
251  libMesh::BoundingBox bbox =
252  MeshTools::create_bounding_box (mesh);
253 
254  this->_find_global_index_by_pid_map(mesh);
255 
256 
257  // count the total number of active elements in the mesh. Note we cannot
258  // use mesh.n_active_elem() in general since this only returns the number
259  // of active elements which are stored on the calling processor.
260  // We should not use n_active_elem for any allocation because that will
261  // be inherently unscalable, but it can be useful for libmesh_assertions.
262  dof_id_type n_active_elem=0;
263 
264  // Set up the vtxdist array. This will be the same on each processor.
265  // ***** Consult the Parmetis documentation. *****
266  libmesh_assert_equal_to (_pmetis->vtxdist.size(),
267  cast_int<std::size_t>(mesh.n_processors()+1));
268  libmesh_assert_equal_to (_pmetis->vtxdist[0], 0);
269 
270  for (processor_id_type pid=0; pid<mesh.n_processors(); pid++)
271  {
272  _pmetis->vtxdist[pid+1] = _pmetis->vtxdist[pid] + _n_active_elem_on_proc[pid];
273  n_active_elem += _n_active_elem_on_proc[pid];
274  }
275  libmesh_assert_equal_to (_pmetis->vtxdist.back(), static_cast<Parmetis::idx_t>(n_active_elem));
276 
277 
278  // Maps active element ids into a contiguous range independent of partitioning.
279  // (only needs local scope)
280  std::unordered_map<dof_id_type, dof_id_type> global_index_map;
281 
282  {
283  std::vector<dof_id_type> global_index;
284 
285  // create the unique mapping for all active elements independent of partitioning
286  {
287  MeshBase::const_element_iterator it = mesh.active_elements_begin();
288  const MeshBase::const_element_iterator end = mesh.active_elements_end();
289 
290  // Calling this on all processors a unique range in [0,n_active_elem) is constructed.
291  // Only the indices for the elements we pass in are returned in the array.
292  MeshCommunication().find_global_indices (mesh.comm(),
293  bbox, it, end,
294  global_index);
295 
296  for (dof_id_type cnt=0; it != end; ++it)
297  {
298  const Elem * elem = *it;
299  // vectormap::count forces a sort, which is too expensive
300  // in a loop
301  // libmesh_assert (!global_index_map.count(elem->id()));
302  libmesh_assert_less (cnt, global_index.size());
303  libmesh_assert_less (global_index[cnt], n_active_elem);
304 
305  global_index_map.insert(std::make_pair(elem->id(), global_index[cnt++]));
306  }
307  }
308  // really, shouldn't be close!
309  libmesh_assert_less_equal (global_index_map.size(), n_active_elem);
310  libmesh_assert_less_equal (_global_index_by_pid_map.size(), n_active_elem);
311 
312  // At this point the two maps should be the same size. If they are not
313  // then the number of active elements is not the same as the sum over all
314  // processors of the number of active elements per processor, which means
315  // there must be some unpartitioned objects out there.
316  if (global_index_map.size() != _global_index_by_pid_map.size())
317  libmesh_error_msg("ERROR: ParmetisPartitioner cannot handle unpartitioned objects!");
318  }
319 
320  // Finally, we need to initialize the vertex (partition) weights and the initial subdomain
321  // mapping. The subdomain mapping will be independent of the processor mapping, and is
322  // defined by a simple mapping of the global indices we just found.
323  {
324  std::vector<dof_id_type> subdomain_bounds(mesh.n_processors());
325 
326  const dof_id_type first_local_elem = _pmetis->vtxdist[mesh.processor_id()];
327 
328  for (processor_id_type pid=0; pid<mesh.n_processors(); pid++)
329  {
330  dof_id_type tgt_subdomain_size = 0;
331 
332  // watch out for the case that n_subdomains < n_processors
333  if (pid < static_cast<unsigned int>(_pmetis->nparts))
334  {
335  tgt_subdomain_size = n_active_elem/std::min
336  (cast_int<Parmetis::idx_t>(mesh.n_processors()), _pmetis->nparts);
337 
338  if (pid < n_active_elem%_pmetis->nparts)
339  tgt_subdomain_size++;
340  }
341  if (pid == 0)
342  subdomain_bounds[0] = tgt_subdomain_size;
343  else
344  subdomain_bounds[pid] = subdomain_bounds[pid-1] + tgt_subdomain_size;
345  }
346 
347  libmesh_assert_equal_to (subdomain_bounds.back(), n_active_elem);
348 
349  for (const auto & elem : mesh.active_local_element_ptr_range())
350  {
351  libmesh_assert (_global_index_by_pid_map.count(elem->id()));
352  const dof_id_type global_index_by_pid =
353  _global_index_by_pid_map[elem->id()];
354  libmesh_assert_less (global_index_by_pid, n_active_elem);
355 
356  const dof_id_type local_index =
357  global_index_by_pid - first_local_elem;
358 
359  libmesh_assert_less (local_index, n_active_local_elem);
360  libmesh_assert_less (local_index, _pmetis->vwgt.size());
361 
362  // TODO:[BSK] maybe there is a better weight?
363  _pmetis->vwgt[local_index] = elem->n_nodes();
364 
365  // find the subdomain this element belongs in
366  libmesh_assert (global_index_map.count(elem->id()));
367  const dof_id_type global_index =
368  global_index_map[elem->id()];
369 
370  libmesh_assert_less (global_index, subdomain_bounds.back());
371 
372  const unsigned int subdomain_id =
373  cast_int<unsigned int>
374  (std::distance(subdomain_bounds.begin(),
375  std::lower_bound(subdomain_bounds.begin(),
376  subdomain_bounds.end(),
377  global_index)));
378  libmesh_assert_less (subdomain_id, static_cast<unsigned int>(_pmetis->nparts));
379  libmesh_assert_less (local_index, _pmetis->part.size());
380 
381  _pmetis->part[local_index] = subdomain_id;
382  }
383  }
384 }

◆ operator=() [1/2]

ParmetisPartitioner & libMesh::ParmetisPartitioner::operator= ( const ParmetisPartitioner & )
delete

This class contains a unique_ptr member, so it can't be default copy assigned.

◆ operator=() [2/2]

ParmetisPartitioner& libMesh::ParmetisPartitioner::operator= ( ParmetisPartitioner &&  )
default

◆ partition() [1/2]

void libMesh::Partitioner::partition ( MeshBase &  mesh,
const unsigned int  n 
)
virtual inherited

Partitions the MeshBase into n parts by setting processor_id() on Nodes and Elems.

Note
If you are implementing a new type of Partitioner, you most likely do not want to override the partition() function; see instead the protected virtual _do_partition() method below. The partition() function is responsible for doing a lot of libmesh-internals-specific setup and finalization before and after the _do_partition() function is called. The only responsibility of the _do_partition() function, on the other hand, is to set the processor IDs of the elements according to a specific partitioning algorithm. See, e.g., MetisPartitioner for an example.
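To make the division of labor concrete, a hypothetical minimal subclass (the class name and round-robin rule are invented for illustration):

#include "libmesh/partitioner.h"
#include "libmesh/mesh_base.h"

using namespace libMesh;

class RoundRobinPartitioner : public Partitioner
{
public:
  virtual std::unique_ptr<Partitioner> clone () const override
  { return libmesh_make_unique<RoundRobinPartitioner>(*this); }

protected:
  // The only responsibility of _do_partition() is to set element
  // processor ids; partition() handles all setup and finalization.
  virtual void _do_partition (MeshBase & mesh,
                              const unsigned int n) override
  {
    for (auto & elem : mesh.active_element_ptr_range())
      elem->processor_id() = cast_int<processor_id_type>(elem->id() % n);
  }
};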

Definition at line 57 of file partitioner.C.

References libMesh::Partitioner::_do_partition(), libMesh::MeshTools::libmesh_assert_valid_remote_elems(), mesh, std::min(), libMesh::Partitioner::partition_unpartitioned_elements(), libMesh::Partitioner::set_node_processor_ids(), libMesh::Partitioner::set_parent_processor_ids(), and libMesh::Partitioner::single_partition().

Referenced by _do_repartition(), and libMesh::Partitioner::partition().

59 {
60  libmesh_parallel_only(mesh.comm());
61 
62  // BSK - temporary fix while redistribution is integrated 6/26/2008
63  // Uncomment this to not repartition in parallel
64  // if (!mesh.is_serial())
65  // return;
66 
67  // we cannot partition into more pieces than we have
68  // active elements!
69  const unsigned int n_parts =
70  static_cast<unsigned int>
71  (std::min(mesh.n_active_elem(), static_cast<dof_id_type>(n)));
72 
73  // Set the number of partitions in the mesh
74  mesh.set_n_partitions()=n_parts;
75 
76  if (n_parts == 1)
77  {
78  this->single_partition (mesh);
79  return;
80  }
81 
82  // First assign a temporary partitioning to any unpartitioned elements
83  Partitioner::partition_unpartitioned_elements(mesh, n_parts);
84 
85  // Call the partitioning function
86  this->_do_partition(mesh,n_parts);
87 
88  // Set the parent's processor ids
89  Partitioner::set_parent_processor_ids(mesh);
90 
91  // Redistribute elements if necessary, before setting node processor
92  // ids, to make sure those will be set consistently
93  mesh.redistribute();
94 
95 #ifdef DEBUG
96  MeshTools::libmesh_assert_valid_remote_elems(mesh);
97 
98  // Messed up elem processor_id()s can leave us without the child
99  // elements we need to restrict vectors on a distributed mesh
100  MeshTools::libmesh_assert_valid_procids<Elem>(mesh);
101 #endif
102 
103  // Set the node's processor ids
104  Partitioner::set_node_processor_ids(mesh);
105 
106 #ifdef DEBUG
107  MeshTools::libmesh_assert_valid_procids<Elem>(mesh);
108 #endif
109 
110  // Give derived Mesh classes a chance to update any cached data to
111  // reflect the new partitioning
112  mesh.update_post_partitioning();
113 }

◆ partition() [2/2]

void libMesh::Partitioner::partition ( MeshBase &  mesh)
virtual inherited

Partitions the MeshBase into mesh.n_processors() parts by setting processor_id() on Nodes and Elems.

Note
If you are implementing a new type of Partitioner, you most likely do not want to override the partition() function; see instead the protected virtual _do_partition() method below. The partition() function is responsible for doing a lot of libmesh-internals-specific setup and finalization before and after the _do_partition() function is called. The only responsibility of the _do_partition() function, on the other hand, is to set the processor IDs of the elements according to a specific partitioning algorithm. See, e.g., MetisPartitioner for an example.

Definition at line 50 of file partitioner.C.

References mesh, and libMesh::Partitioner::partition().

51 {
52  this->partition(mesh,mesh.n_processors());
53 }

◆ partition_range()

virtual void libMesh::Partitioner::partition_range ( MeshBase & ,
MeshBase::element_iterator  ,
MeshBase::element_iterator  ,
const unsigned int   
)
inline virtual inherited

Partitions elements in the range (it, end) into n parts. The mesh from which the iterators are created must also be passed in, since it is a parallel object and has other useful information in it.

Although partition_range() is part of the public Partitioner interface, it should not generally be called by applications. Its main purpose is to support the SubdomainPartitioner, which uses it internally to individually partition ranges of elements before combining them into the final partitioning. Most of the time, the protected _do_partition() function is implemented in terms of partition_range() by passing a range which includes all the elements of the Mesh.

Reimplemented in libMesh::CentroidPartitioner, libMesh::SFCPartitioner, libMesh::MappedSubdomainPartitioner, libMesh::LinearPartitioner, and libMesh::MetisPartitioner.

Definition at line 127 of file partitioner.h.

131  { libmesh_not_implemented(); }
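A sketch of direct use through a concrete reimplementation (MetisPartitioner), mirroring the calls made in _do_repartition() above; a MeshBase & mesh in scope is assumed and the part count is arbitrary:

MetisPartitioner mp;
mp.partition_range (mesh,
                    mesh.active_elements_begin(),
                    mesh.active_elements_end(),
                    4); // split the active elements into 4 parts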

◆ partition_unpartitioned_elements() [1/2]

void libMesh::Partitioner::partition_unpartitioned_elements ( MeshBase &  mesh)
static inherited

These functions assign processor IDs to newly-created elements (in parallel) which are currently assigned to processor 0.

Definition at line 187 of file partitioner.C.

References mesh.

Referenced by libMesh::Partitioner::partition(), and libMesh::Partitioner::repartition().

188 {
189  Partitioner::partition_unpartitioned_elements (mesh, mesh.n_processors());
190 }

◆ partition_unpartitioned_elements() [2/2]

void libMesh::Partitioner::partition_unpartitioned_elements ( MeshBase &  mesh,
const unsigned int  n 
)
static inherited

Definition at line 194 of file partitioner.C.

References libMesh::as_range(), libMesh::MeshTools::create_bounding_box(), end, libMesh::MeshCommunication::find_global_indices(), mesh, and libMesh::MeshTools::n_elem().

196 {
197  MeshBase::element_iterator it = mesh.unpartitioned_elements_begin();
198  const MeshBase::element_iterator end = mesh.unpartitioned_elements_end();
199 
200  const dof_id_type n_unpartitioned_elements = MeshTools::n_elem (it, end);
201 
202  // the unpartitioned elements must exist on all processors. If the range is empty on one
203  // it is empty on all, and we can quit right here.
204  if (!n_unpartitioned_elements)
205  return;
206 
207  // find the target subdomain sizes
208  std::vector<dof_id_type> subdomain_bounds(mesh.n_processors());
209 
210  for (processor_id_type pid=0; pid<mesh.n_processors(); pid++)
211  {
212  dof_id_type tgt_subdomain_size = 0;
213 
214  // watch out for the case that n_subdomains < n_processors
215  if (pid < n_subdomains)
216  {
217  tgt_subdomain_size = n_unpartitioned_elements/n_subdomains;
218 
219  if (pid < n_unpartitioned_elements%n_subdomains)
220  tgt_subdomain_size++;
221 
222  }
223 
224  //libMesh::out << "pid, #= " << pid << ", " << tgt_subdomain_size << std::endl;
225  if (pid == 0)
226  subdomain_bounds[0] = tgt_subdomain_size;
227  else
228  subdomain_bounds[pid] = subdomain_bounds[pid-1] + tgt_subdomain_size;
229  }
230 
231  libmesh_assert_equal_to (subdomain_bounds.back(), n_unpartitioned_elements);
232 
233  // create the unique mapping for all unpartitioned elements independent of partitioning
234  // determine the global indexing for all the unpartitioned elements
235  std::vector<dof_id_type> global_indices;
236 
237  // Calling this on all processors a unique range in [0,n_unpartitioned_elements) is constructed.
238  // Only the indices for the elements we pass in are returned in the array.
239  MeshCommunication().find_global_indices (mesh.comm(),
240  MeshTools::create_bounding_box(mesh), it, end,
241  global_indices);
242 
243  dof_id_type cnt=0;
244  for (auto & elem : as_range(it, end))
245  {
246  libmesh_assert_less (cnt, global_indices.size());
247  const dof_id_type global_index =
248  global_indices[cnt++];
249 
250  libmesh_assert_less (global_index, subdomain_bounds.back());
251  libmesh_assert_less (global_index, n_unpartitioned_elements);
252 
253  const processor_id_type subdomain_id =
254  cast_int<processor_id_type>
255  (std::distance(subdomain_bounds.begin(),
256  std::upper_bound(subdomain_bounds.begin(),
257  subdomain_bounds.end(),
258  global_index)));
259  libmesh_assert_less (subdomain_id, n_subdomains);
260 
261  elem->processor_id() = subdomain_id;
262  //libMesh::out << "assigning " << global_index << " to " << subdomain_id << std::endl;
263  }
264 }

◆ processor_pairs_to_interface_nodes()

void libMesh::Partitioner::processor_pairs_to_interface_nodes ( MeshBase &  mesh,
std::map< std::pair< processor_id_type, processor_id_type >, std::set< dof_id_type >> &  processor_pair_to_nodes 
)
static inherited

On the partitioning interface, each surface is shared by exactly two processors. This function determines which pair of processors shares each surface, and stores the nodes of those surfaces.

Definition at line 421 of file partitioner.C.

References libMesh::DofObject::invalid_processor_id, std::max(), mesh, std::min(), and n_nodes.

Referenced by libMesh::Partitioner::set_interface_node_processor_ids_BFS(), libMesh::Partitioner::set_interface_node_processor_ids_linear(), and libMesh::Partitioner::set_interface_node_processor_ids_petscpartitioner().

423 {
424  // This function must be run on all processors at once
425  libmesh_parallel_only(mesh.comm());
426 
427  processor_pair_to_nodes.clear();
428 
429  std::set<dof_id_type> mynodes;
430  std::set<dof_id_type> neighbor_nodes;
431  std::vector<dof_id_type> common_nodes;
432 
433  // Loop over all the active elements
434  for (auto & elem : mesh.active_element_ptr_range())
435  {
436  libmesh_assert(elem);
437 
438  libmesh_assert_not_equal_to (elem->processor_id(), DofObject::invalid_processor_id);
439 
440  auto n_nodes = elem->n_nodes();
441 
442  // prepare data for this element
443  mynodes.clear();
444  neighbor_nodes.clear();
445  common_nodes.clear();
446 
447  for (unsigned int inode = 0; inode < n_nodes; inode++)
448  mynodes.insert(elem->node_id(inode));
449 
450  for (auto i : elem->side_index_range())
451  {
452  auto neigh = elem->neighbor_ptr(i);
453  if (neigh && !neigh->is_remote() && neigh->processor_id() != elem->processor_id())
454  {
455  neighbor_nodes.clear();
456  common_nodes.clear();
457  auto neigh_n_nodes = neigh->n_nodes();
458  for (unsigned int inode = 0; inode < neigh_n_nodes; inode++)
459  neighbor_nodes.insert(neigh->node_id(inode));
460 
461  std::set_intersection(mynodes.begin(), mynodes.end(),
462  neighbor_nodes.begin(), neighbor_nodes.end(),
463  std::back_inserter(common_nodes));
464 
465  auto & map_set = processor_pair_to_nodes[std::make_pair(std::min(elem->processor_id(), neigh->processor_id()),
466  std::max(elem->processor_id(), neigh->processor_id()))];
467  for (auto global_node_id : common_nodes)
468  map_set.insert(global_node_id);
469  }
470  }
471  }
472 }
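A sketch of inspecting the resulting map (a MeshBase & mesh in scope is assumed; the output formatting is illustrative):

std::map<std::pair<processor_id_type, processor_id_type>,
         std::set<dof_id_type>> pair_to_nodes;
Partitioner::processor_pairs_to_interface_nodes (mesh, pair_to_nodes);

for (const auto & pr : pair_to_nodes)
  libMesh::out << "processors (" << pr.first.first << ", " << pr.first.second
               << ") share " << pr.second.size() << " interface nodes\n";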

◆ repartition() [1/2]

void libMesh::Partitioner::repartition ( MeshBase &  mesh,
const unsigned int  n 
)
inherited

Repartitions the MeshBase into n parts. (Some partitioning algorithms can repartition more efficiently than computing a new partitioning from scratch.) The default behavior is to simply call this->partition(mesh,n).

Definition at line 124 of file partitioner.C.

References libMesh::Partitioner::_do_repartition(), mesh, std::min(), libMesh::Partitioner::partition_unpartitioned_elements(), libMesh::Partitioner::set_node_processor_ids(), libMesh::Partitioner::set_parent_processor_ids(), and libMesh::Partitioner::single_partition().

Referenced by libMesh::Partitioner::repartition().

126 {
127  // we cannot partition into more pieces than we have
128  // active elements!
129  const unsigned int n_parts =
130  static_cast<unsigned int>
131  (std::min(mesh.n_active_elem(), static_cast<dof_id_type>(n)));
132 
133  // Set the number of partitions in the mesh
134  mesh.set_n_partitions()=n_parts;
135 
136  if (n_parts == 1)
137  {
138  this->single_partition (mesh);
139  return;
140  }
141 
142  // First assign a temporary partitioning to any unpartitioned elements
143  Partitioner::partition_unpartitioned_elements(mesh, n_parts);
144 
145  // Call the partitioning function
146  this->_do_repartition(mesh,n_parts);
147 
148  // Set the parent's processor ids
149  Partitioner::set_parent_processor_ids(mesh);
150 
151  // Set the node's processor ids
152  Partitioner::set_node_processor_ids(mesh);
153 }

◆ repartition() [2/2]

void libMesh::Partitioner::repartition ( MeshBase &  mesh)
inherited

Repartitions the MeshBase into mesh.n_processors() parts. This is required since some partitioning algorithms can repartition more efficiently than computing a new partitioning from scratch.

Definition at line 117 of file partitioner.C.

References mesh, and libMesh::Partitioner::repartition().

118 {
119  this->repartition(mesh,mesh.n_processors());
120 }

◆ set_interface_node_processor_ids_BFS()

void libMesh::Partitioner::set_interface_node_processor_ids_BFS ( MeshBase &  mesh)
static inherited

Nodes on the partitioning interface are clustered into two groups, using a BFS (breadth-first search) scheme, for each pair of processors.

Definition at line 498 of file partitioner.C.

References libMesh::MeshTools::build_nodes_to_elem_map(), libMesh::MeshTools::find_nodal_neighbors(), mesh, and libMesh::Partitioner::processor_pairs_to_interface_nodes().

Referenced by libMesh::Partitioner::set_node_processor_ids().

499 {
500  // This function must be run on all processors at once
501  libmesh_parallel_only(mesh.comm());
502 
503  std::map<std::pair<processor_id_type, processor_id_type>, std::set<dof_id_type>> processor_pair_to_nodes;
504 
505  processor_pairs_to_interface_nodes(mesh, processor_pair_to_nodes);
506 
507  std::unordered_map<dof_id_type, std::vector<const Elem *>> nodes_to_elem_map;
508 
509  MeshTools::build_nodes_to_elem_map(mesh, nodes_to_elem_map);
510 
511  std::vector<const Node *> neighbors;
512  std::set<dof_id_type> neighbors_order;
513  std::vector<dof_id_type> common_nodes;
514  std::queue<dof_id_type> nodes_queue;
515  std::set<dof_id_type> visted_nodes;
516 
517  for (auto & pmap : processor_pair_to_nodes)
518  {
519  std::size_t n_own_nodes = pmap.second.size()/2;
520 
521  // Initialize node assignment
522  for (auto it = pmap.second.begin(); it != pmap.second.end(); it++)
523  mesh.node_ref(*it).processor_id() = pmap.first.second;
524 
525  visted_nodes.clear();
526  for (auto it = pmap.second.begin(); it != pmap.second.end(); it++)
527  {
528  mesh.node_ref(*it).processor_id() = pmap.first.second;
529 
530  if (visted_nodes.find(*it) != visted_nodes.end())
531  continue;
532  else
533  {
534  nodes_queue.push(*it);
535  visted_nodes.insert(*it);
536  if (visted_nodes.size() >= n_own_nodes)
537  break;
538  }
539 
540  while (!nodes_queue.empty())
541  {
542  auto & node = mesh.node_ref(nodes_queue.front());
543  nodes_queue.pop();
544 
545  neighbors.clear();
546  MeshTools::find_nodal_neighbors(mesh, node, nodes_to_elem_map, neighbors);
547  neighbors_order.clear();
548  for (auto & neighbor : neighbors)
549  neighbors_order.insert(neighbor->id());
550 
551  common_nodes.clear();
552  std::set_intersection(pmap.second.begin(), pmap.second.end(),
553  neighbors_order.begin(), neighbors_order.end(),
554  std::back_inserter(common_nodes));
555 
556  for (auto c_node : common_nodes)
557  if (visted_nodes.find(c_node) == visted_nodes.end())
558  {
559  nodes_queue.push(c_node);
560  visted_nodes.insert(c_node);
561  if (visted_nodes.size() >= n_own_nodes)
562  goto queue_done;
563  }
564 
565  if (visted_nodes.size() >= n_own_nodes)
566  goto queue_done;
567  }
568  }
569  queue_done:
570  for (auto node : visted_nodes)
571  mesh.node_ref(node).processor_id() = pmap.first.first;
572  }
573 }

◆ set_interface_node_processor_ids_linear()

void libMesh::Partitioner::set_interface_node_processor_ids_linear ( MeshBase &  mesh)
static inherited

Nodes on the partitioning interface are assigned linearly between each pair of processors.

Definition at line 474 of file partitioner.C.

References mesh, and libMesh::Partitioner::processor_pairs_to_interface_nodes().

Referenced by libMesh::Partitioner::set_node_processor_ids().

475 {
476  // This function must be run on all processors at once
477  libmesh_parallel_only(mesh.comm());
478 
479  std::map<std::pair<processor_id_type, processor_id_type>, std::set<dof_id_type>> processor_pair_to_nodes;
480 
481  processor_pairs_to_interface_nodes(mesh, processor_pair_to_nodes);
482 
483  for (auto & pmap : processor_pair_to_nodes)
484  {
485  std::size_t n_own_nodes = pmap.second.size()/2, i = 0;
486 
487  for (auto it = pmap.second.begin(); it != pmap.second.end(); it++, i++)
488  {
489  auto & node = mesh.node_ref(*it);
490  if (i <= n_own_nodes)
491  node.processor_id() = pmap.first.first;
492  else
493  node.processor_id() = pmap.first.second;
494  }
495  }
496 }

◆ set_interface_node_processor_ids_petscpartitioner()

void libMesh::Partitioner::set_interface_node_processor_ids_petscpartitioner ( MeshBase &  mesh)
static inherited

Nodes on the partitioning interface are partitioned into two groups, using a PETSc partitioner, for each pair of processors.

Definition at line 575 of file partitioner.C.

References libMesh::MeshTools::build_nodes_to_elem_map(), libMesh::MeshTools::find_nodal_neighbors(), libMesh::libmesh_ignore(), mesh, and libMesh::Partitioner::processor_pairs_to_interface_nodes().

Referenced by libMesh::Partitioner::set_node_processor_ids().

576 {
577  libmesh_ignore(mesh); // Only used if LIBMESH_HAVE_PETSC
578 
579  // This function must be run on all processors at once
580  libmesh_parallel_only(mesh.comm());
581 
582 #if LIBMESH_HAVE_PETSC
583  std::map<std::pair<processor_id_type, processor_id_type>, std::set<dof_id_type>> processor_pair_to_nodes;
584 
585  processor_pairs_to_interface_nodes(mesh, processor_pair_to_nodes);
586 
587  std::vector<std::vector<const Elem *>> nodes_to_elem_map;
588 
589  MeshTools::build_nodes_to_elem_map(mesh, nodes_to_elem_map);
590 
591  std::vector<const Node *> neighbors;
592  std::set<dof_id_type> neighbors_order;
593  std::vector<dof_id_type> common_nodes;
594 
595  std::vector<dof_id_type> rows;
596  std::vector<dof_id_type> cols;
597 
598  std::map<dof_id_type, dof_id_type> global_to_local;
599 
600  for (auto & pmap : processor_pair_to_nodes)
601  {
602  unsigned int i = 0;
603 
604  rows.clear();
605  rows.resize(pmap.second.size()+1);
606  cols.clear();
607  for (auto it = pmap.second.begin(); it != pmap.second.end(); it++)
608  global_to_local[*it] = i++;
609 
610  i = 0;
611  for (auto it = pmap.second.begin(); it != pmap.second.end(); it++, i++)
612  {
613  auto & node = mesh.node_ref(*it);
614  neighbors.clear();
615  MeshTools::find_nodal_neighbors(mesh, node, nodes_to_elem_map, neighbors);
616  neighbors_order.clear();
617  for (auto & neighbor : neighbors)
618  neighbors_order.insert(neighbor->id());
619 
620  common_nodes.clear();
621  std::set_intersection(pmap.second.begin(), pmap.second.end(),
622  neighbors_order.begin(), neighbors_order.end(),
623  std::back_inserter(common_nodes));
624 
625  rows[i+1] = rows[i] + cast_int<dof_id_type>(common_nodes.size());
626 
627  for (auto c_node : common_nodes)
628  cols.push_back(global_to_local[c_node]);
629  }
630 
631  Mat adj;
632  MatPartitioning part;
633  IS is;
634  PetscInt local_size, rows_size, cols_size;
635  PetscInt *adj_i, *adj_j;
636  const PetscInt *indices;
637  PetscCalloc1(rows.size(), &adj_i);
638  PetscCalloc1(cols.size(), &adj_j);
639  rows_size = cast_int<PetscInt>(rows.size());
640  for (PetscInt ii=0; ii<rows_size; ii++)
641  adj_i[ii] = rows[ii];
642 
643  cols_size = cast_int<PetscInt>(cols.size());
644  for (PetscInt ii=0; ii<cols_size; ii++)
645  adj_j[ii] = cols[ii];
646 
647  const PetscInt sz = cast_int<PetscInt>(pmap.second.size());
648  MatCreateMPIAdj(PETSC_COMM_SELF, sz, sz, adj_i, adj_j,nullptr,&adj);
649  MatPartitioningCreate(PETSC_COMM_SELF,&part);
650  MatPartitioningSetAdjacency(part,adj);
651  MatPartitioningSetNParts(part,2);
652  PetscObjectSetOptionsPrefix((PetscObject)part, "balance_");
653  MatPartitioningSetFromOptions(part);
654  MatPartitioningApply(part,&is);
655 
656  MatDestroy(&adj);
657  MatPartitioningDestroy(&part);
658 
659  ISGetLocalSize(is, &local_size);
660  ISGetIndices(is, &indices);
661  i = 0;
662  for (auto it = pmap.second.begin(); it != pmap.second.end(); it++, i++)
663  {
664  auto & node = mesh.node_ref(*it);
665  if (indices[i])
666  node.processor_id() = pmap.first.second;
667  else
668  node.processor_id() = pmap.first.first;
669  }
670  ISRestoreIndices(is, &indices);
671  ISDestroy(&is);
672  }
673 #else
674  libmesh_error_msg("PETSc is required");
675 #endif
676 }

◆ set_node_processor_ids()

void libMesh::Partitioner::set_node_processor_ids ( MeshBase &  mesh)
static inherited

This function is called after partitioning to set the processor IDs for the nodes. By definition, a Node's processor ID is the minimum processor ID for all of the elements which share the node.

Definition at line 679 of file partitioner.C.

References libMesh::as_range(), libMesh::Node::choose_processor_id(), libMesh::DofObject::invalid_processor_id, mesh, libMesh::MeshTools::n_elem(), libMesh::on_command_line(), libMesh::DofObject::processor_id(), libMesh::Parallel::pull_parallel_vector_data(), libMesh::Partitioner::set_interface_node_processor_ids_BFS(), libMesh::Partitioner::set_interface_node_processor_ids_linear(), and libMesh::Partitioner::set_interface_node_processor_ids_petscpartitioner().

Referenced by libMesh::MeshRefinement::_refine_elements(), libMesh::UnstructuredMesh::all_first_order(), libMesh::Partitioner::partition(), libMesh::XdrIO::read(), libMesh::Partitioner::repartition(), and libMesh::BoundaryInfo::sync().

680 {
681  LOG_SCOPE("set_node_processor_ids()","Partitioner");
682 
683  // This function must be run on all processors at once
684  libmesh_parallel_only(mesh.comm());
685 
686  // If we have any unpartitioned elements at this
687  // stage there is a problem
688  libmesh_assert (MeshTools::n_elem(mesh.unpartitioned_elements_begin(),
689  mesh.unpartitioned_elements_end()) == 0);
690 
691 
692  // const dof_id_type orig_n_local_nodes = mesh.n_local_nodes();
693 
694  // libMesh::err << "[" << mesh.processor_id() << "]: orig_n_local_nodes="
695  // << orig_n_local_nodes << std::endl;
696 
697  // Build up request sets. Each node is currently owned by a processor because
698  // it is connected to an element owned by that processor. However, during the
699  // repartitioning phase that element may have been assigned a new processor id, but
700  // it is still resident on the original processor. We need to know where to look
701  // for new ids before assigning new ids, otherwise we may be asking the wrong processors
702  // for the wrong information.
703  //
704  // The only remaining issue is what to do with unpartitioned nodes. Since they are required
705  // to live on all processors we can simply rely on ourselves to number them properly.
706  std::map<processor_id_type, std::vector<dof_id_type>>
707  requested_node_ids;
708 
709  // Loop over all the nodes, count the ones on each processor. We can skip ourself
710  std::vector<dof_id_type> ghost_nodes_from_proc(mesh.n_processors(), 0);
711 
712  for (auto & node : mesh.node_ptr_range())
713  {
714  libmesh_assert(node);
715  const processor_id_type current_pid = node->processor_id();
716  if (current_pid != mesh.processor_id() &&
717  current_pid != DofObject::invalid_processor_id)
718  {
719  libmesh_assert_less (current_pid, ghost_nodes_from_proc.size());
720  ghost_nodes_from_proc[current_pid]++;
721  }
722  }
723 
724  // We know how many objects live on each processor, so reserve()
725  // space for each.
726  for (processor_id_type pid=0; pid != mesh.n_processors(); ++pid)
727  if (ghost_nodes_from_proc[pid])
728  requested_node_ids[pid].reserve(ghost_nodes_from_proc[pid]);
729 
730  // We need to get the new pid for each node from the processor
731  // which *currently* owns the node. We can safely skip ourself
732  for (auto & node : mesh.node_ptr_range())
733  {
734  libmesh_assert(node);
735  const processor_id_type current_pid = node->processor_id();
736  if (current_pid != mesh.processor_id() &&
737  current_pid != DofObject::invalid_processor_id)
738  {
739  libmesh_assert_less (requested_node_ids[current_pid].size(),
740  ghost_nodes_from_proc[current_pid]);
741  requested_node_ids[current_pid].push_back(node->id());
742  }
743 
744  // Unset any previously-set node processor ids
745  node->invalidate_processor_id();
746  }
747 
748  // Loop over all the active elements
749  for (auto & elem : mesh.active_element_ptr_range())
750  {
751  libmesh_assert(elem);
752 
753  libmesh_assert_not_equal_to (elem->processor_id(), DofObject::invalid_processor_id);
754 
755  // Consider updating the processor id on this element's nodes
756  for (unsigned int n=0; n<elem->n_nodes(); ++n)
757  {
758  Node & node = elem->node_ref(n);
759  processor_id_type & pid = node.processor_id();
760  pid = node.choose_processor_id(pid, elem->processor_id());
761  }
762  }
763 
764  bool load_balanced_nodes_linear =
765  libMesh::on_command_line ("--load-balanced-nodes-linear");
766 
767  if (load_balanced_nodes_linear)
768  set_interface_node_processor_ids_linear(mesh);
769 
770  bool load_balanced_nodes_bfs =
771  libMesh::on_command_line ("--load-balanced-nodes-bfs");
772 
773  if (load_balanced_nodes_bfs)
774  set_interface_node_processor_ids_BFS(mesh);
775 
776  bool load_balanced_nodes_petscpartition =
777  libMesh::on_command_line ("--load_balanced_nodes_petscpartitioner");
778 
779  if (load_balanced_nodes_petscpartition)
780  set_interface_node_processor_ids_petscpartitioner(mesh);
781 
782  // And loop over the subactive elements, but don't reassign
783  // nodes that are already active on another processor.
784  for (auto & elem : as_range(mesh.subactive_elements_begin(),
785  mesh.subactive_elements_end()))
786  {
787  libmesh_assert(elem);
788 
789  libmesh_assert_not_equal_to (elem->processor_id(), DofObject::invalid_processor_id);
790 
791  for (unsigned int n=0; n<elem->n_nodes(); ++n)
792  if (elem->node_ptr(n)->processor_id() == DofObject::invalid_processor_id)
793  elem->node_ptr(n)->processor_id() = elem->processor_id();
794  }
795 
796  // Same for the inactive elements -- we will have already gotten most of these
797  // nodes, *except* for the case of a parent with a subset of children which are
798  // ghost elements. In that case some of the parent nodes will not have been
799  // properly handled yet
800  for (auto & elem : as_range(mesh.not_active_elements_begin(),
801  mesh.not_active_elements_end()))
802  {
803  libmesh_assert(elem);
804 
805  libmesh_assert_not_equal_to (elem->processor_id(), DofObject::invalid_processor_id);
806 
807  for (unsigned int n=0; n<elem->n_nodes(); ++n)
808  if (elem->node_ptr(n)->processor_id() == DofObject::invalid_processor_id)
809  elem->node_ptr(n)->processor_id() = elem->processor_id();
810  }
811 
812  // We can't assert that all nodes are connected to elements, because
813  // a DistributedMesh with NodeConstraints might have pulled in some
814  // remote nodes solely for evaluating those constraints.
815  // MeshTools::libmesh_assert_connected_nodes(mesh);
816 
817  // For such nodes, we'll do a sanity check later when making sure
818  // that we successfully reset their processor ids to something
819  // valid.
820 
821  auto gather_functor =
822  [& mesh]
823  (processor_id_type, const std::vector<dof_id_type> & ids,
824  std::vector<processor_id_type> & new_pids)
825  {
826  const std::size_t ids_size = ids.size();
827  new_pids.resize(ids_size);
828 
829  // Fill those requests in-place
830  for (std::size_t i=0; i != ids_size; ++i)
831  {
832  Node & node = mesh.node_ref(ids[i]);
833  const processor_id_type new_pid = node.processor_id();
834 
835  // We may have an invalid processor_id() on nodes that have been
836  // "detached" from coarsened-away elements but that have not yet
837  // themselves been removed.
838  // libmesh_assert_not_equal_to (new_pid, DofObject::invalid_processor_id);
839  // libmesh_assert_less (new_pid, mesh.n_partitions()); // this is the correct test --
840  new_pids[i] = new_pid; // the number of partitions may
841  } // not equal the number of processors
842  };
843 
844  auto action_functor =
845  [& mesh]
846  (processor_id_type,
847  const std::vector<dof_id_type> & ids,
848  const std::vector<processor_id_type> & new_pids)
849  {
850  const std::size_t ids_size = ids.size();
851  // Copy the pid changes we've now been informed of
852  for (std::size_t i=0; i != ids_size; ++i)
853  {
854  Node & node = mesh.node_ref(ids[i]);
855 
856  // this is the correct test -- the number of partitions may
857  // not equal the number of processors
858 
859  // But: we may have an invalid processor_id() on nodes that
860  // have been "detached" from coarsened-away elements but
861  // that have not yet themselves been removed.
862  // libmesh_assert_less (filled_request[i], mesh.n_partitions());
863 
864  node.processor_id(new_pids[i]);
865  }
866  };
867 
868  const processor_id_type * ex = nullptr;
869  Parallel::pull_parallel_vector_data
870  (mesh.comm(), requested_node_ids, gather_functor, action_functor, ex);
871 
872 #ifdef DEBUG
873  MeshTools::libmesh_assert_valid_procids<Node>(mesh);
874  //MeshTools::libmesh_assert_canonical_node_procids(mesh);
875 #endif
876 }

◆ set_parent_processor_ids()

void libMesh::Partitioner::set_parent_processor_ids ( MeshBase mesh)
staticinherited

This function is called after partitioning to set the processor IDs for the inactive parent elements. By convention, a parent is assigned the minimum processor ID of its children, and a subactive element is assigned the processor ID of its active ancestor.

Definition at line 268 of file partitioner.C.

References libMesh::as_range(), libMesh::Elem::child_ref_range(), libMesh::Partitioner::communication_blocksize, libMesh::DofObject::invalid_processor_id, libMesh::DofObject::invalidate_processor_id(), libMesh::libmesh_ignore(), mesh, std::min(), libMesh::MeshTools::n_elem(), libMesh::Elem::parent(), libMesh::DofObject::processor_id(), and libMesh::Elem::total_family_tree().

Referenced by libMesh::Partitioner::partition(), and libMesh::Partitioner::repartition().

269 {
270  // Ignore the parameter when !LIBMESH_ENABLE_AMR
271  libmesh_ignore(mesh);
272 
273  LOG_SCOPE("set_parent_processor_ids()", "Partitioner");
274 
275 #ifdef LIBMESH_ENABLE_AMR
276 
277  // If the mesh is serial we have access to all the elements,
278  // in particular all the active ones. We can therefore set
279  // the parent processor ids indirectly through their children, and
280  // set the subactive processor ids while examining their active
281  // ancestors.
282  // By convention a parent is assigned to the minimum processor
283  // of all its children, and a subactive is assigned to the processor
284  // of its active ancestor.
285  if (mesh.is_serial())
286  {
287  for (auto & elem : mesh.active_element_ptr_range())
288  {
289  // First set descendents
290  std::vector<const Elem *> subactive_family;
291  elem->total_family_tree(subactive_family);
292  for (std::size_t i = 0; i != subactive_family.size(); ++i)
293  const_cast<Elem *>(subactive_family[i])->processor_id() = elem->processor_id();
294 
295  // Then set ancestors
296  Elem * parent = elem->parent();
297 
298  while (parent)
299  {
300  // invalidate the parent id, otherwise the min below
301  // will not work if the current parent id is less
302  // than all the children!
303  parent->invalidate_processor_id();
304 
305  for (auto & child : parent->child_ref_range())
306  {
307  libmesh_assert(!child.is_remote());
308  libmesh_assert_not_equal_to (child.processor_id(), DofObject::invalid_processor_id);
309  parent->processor_id() = std::min(parent->processor_id(),
310  child.processor_id());
311  }
312  parent = parent->parent();
313  }
314  }
315  }
316 
317  // When the mesh is parallel we cannot guarantee that parents have access to
318  // all their children.
319  else
320  {
321  // Setting subactive processor ids is easy: we can guarantee
322  // that children have access to all their parents.
323 
324  // Loop over all the active elements in the mesh
325  for (auto & child : mesh.active_element_ptr_range())
326  {
327  std::vector<const Elem *> subactive_family;
328  child->total_family_tree(subactive_family);
329  for (std::size_t i = 0; i != subactive_family.size(); ++i)
330  const_cast<Elem *>(subactive_family[i])->processor_id() = child->processor_id();
331  }
332 
333  // When the mesh is parallel we cannot guarantee that parents have access to
334  // all their children.
335 
336  // We will use a brute-force approach here. Each processor finds its parent
337  // elements and sets the parent pid to the minimum of its
338  // semilocal descendants.
339  // A global reduction is then performed to make sure the true minimum is found.
340  // As noted, this is required because we cannot guarantee that a parent has
341  // access to all its children on any single processor.
342  libmesh_parallel_only(mesh.comm());
343  libmesh_assert(MeshTools::n_elem(mesh.unpartitioned_elements_begin(),
344  mesh.unpartitioned_elements_end()) == 0);
345 
346  const dof_id_type max_elem_id = mesh.max_elem_id();
347 
348  std::vector<processor_id_type>
349  parent_processor_ids (std::min(communication_blocksize,
350  max_elem_id));
351 
352  for (dof_id_type blk=0, last_elem_id=0; last_elem_id<max_elem_id; blk++)
353  {
354  last_elem_id =
355  std::min(static_cast<dof_id_type>((blk+1)*communication_blocksize),
356  max_elem_id);
357  const dof_id_type first_elem_id = blk*communication_blocksize;
358 
359  std::fill (parent_processor_ids.begin(),
360  parent_processor_ids.end(),
361  DofObject::invalid_processor_id);
362 
363  // first build up local contributions to parent_processor_ids
364  bool have_parent_in_block = false;
365 
366  for (auto & parent : as_range(mesh.ancestor_elements_begin(),
367  mesh.ancestor_elements_end()))
368  {
369  const dof_id_type parent_idx = parent->id();
370  libmesh_assert_less (parent_idx, max_elem_id);
371 
372  if ((parent_idx >= first_elem_id) &&
373  (parent_idx < last_elem_id))
374  {
375  have_parent_in_block = true;
376  processor_id_type parent_pid = DofObject::invalid_processor_id;
377 
378  std::vector<const Elem *> active_family;
379  parent->active_family_tree(active_family);
380  for (std::size_t i = 0; i != active_family.size(); ++i)
381  parent_pid = std::min (parent_pid, active_family[i]->processor_id());
382 
383  const dof_id_type packed_idx = parent_idx - first_elem_id;
384  libmesh_assert_less (packed_idx, parent_processor_ids.size());
385 
386  parent_processor_ids[packed_idx] = parent_pid;
387  }
388  }
389 
390  // then find the global minimum
391  mesh.comm().min (parent_processor_ids);
392 
393  // and assign the ids, if we have a parent in this block.
394  if (have_parent_in_block)
395  for (auto & parent : as_range(mesh.ancestor_elements_begin(),
396  mesh.ancestor_elements_end()))
397  {
398  const dof_id_type parent_idx = parent->id();
399 
400  if ((parent_idx >= first_elem_id) &&
401  (parent_idx < last_elem_id))
402  {
403  const dof_id_type packed_idx = parent_idx - first_elem_id;
404  libmesh_assert_less (packed_idx, parent_processor_ids.size());
405 
406  const processor_id_type parent_pid =
407  parent_processor_ids[packed_idx];
408 
409  libmesh_assert_not_equal_to (parent_pid, DofObject::invalid_processor_id);
410 
411  parent->processor_id() = parent_pid;
412  }
413  }
414  }
415  }
416 
417 #endif // LIBMESH_ENABLE_AMR
418 }
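In practice this fixup is invoked for you by Partitioner::partition() and Partitioner::repartition(). A sketch of the call order for code that assigns element processor ids by hand (the round-robin assignment is a toy placeholder, not a balanced partitioning):

#include "libmesh/elem.h"
#include "libmesh/mesh_base.h"
#include "libmesh/partitioner.h"

using namespace libMesh;

void manual_partition (MeshBase & mesh)
{
  // Toy assignment only: deal active elements out round-robin.
  for (auto & elem : mesh.active_element_ptr_range())
    elem->processor_id() =
      static_cast<processor_id_type>(elem->id() % mesh.n_processors());

  // Inactive parents get the minimum pid of their descendants ...
  Partitioner::set_parent_processor_ids(mesh);

  // ... and nodes get pids chosen from their attached elements.
  Partitioner::set_node_processor_ids(mesh);
}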

◆ single_partition()

void libMesh::Partitioner::single_partition ( MeshBase mesh)
protectedinherited

Trivially "partitions" the mesh for one processor. Simply loops through the elements and assigns all of them to processor 0. Is is provided as a separate function so that derived classes may use it without reimplementing it.

Definition at line 159 of file partitioner.C.

References libMesh::MeshBase::elements_begin(), mesh, and libMesh::Partitioner::single_partition_range().

Referenced by libMesh::SubdomainPartitioner::_do_partition(), libMesh::Partitioner::partition(), and libMesh::Partitioner::repartition().

160 {
161  this->single_partition_range(mesh.elements_begin(),
162  mesh.elements_end());
163 
164  // Redistribute, in case someone (like our unit tests) is doing
165  // something silly (like moving a whole already-distributed mesh
166  // back onto rank 0).
167  mesh.redistribute();
168 }

◆ single_partition_range()

void libMesh::Partitioner::single_partition_range ( MeshBase::element_iterator  it,
MeshBase::element_iterator  end 
)
protectedinherited

Slightly generalized version of single_partition which acts on a range of elements defined by the pair of iterators (it, end).

Definition at line 172 of file partitioner.C.

References libMesh::as_range(), and end.

Referenced by libMesh::LinearPartitioner::partition_range(), libMesh::MetisPartitioner::partition_range(), libMesh::MappedSubdomainPartitioner::partition_range(), libMesh::SFCPartitioner::partition_range(), libMesh::CentroidPartitioner::partition_range(), and libMesh::Partitioner::single_partition().

174 {
175  LOG_SCOPE("single_partition_range()", "Partitioner");
176 
177  for (auto & elem : as_range(it, end))
178  {
179  elem->processor_id() = 0;
180 
181  // Assign all this element's nodes to processor 0 as well.
182  for (unsigned int n=0; n<elem->n_nodes(); ++n)
183  elem->node_ptr(n)->processor_id() = 0;
184  }
185 }
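The "Referenced by" list above shows the typical use: concrete partitioners handle the trivial one-partition case with this helper before doing any real work. A sketch of that pattern in a hypothetical derived class (MyPartitioner is illustrative; the structure mirrors, e.g., LinearPartitioner::partition_range()):

void MyPartitioner::partition_range (MeshBase & mesh,
                                     MeshBase::element_iterator it,
                                     MeshBase::element_iterator end,
                                     const unsigned int n)
{
  libmesh_ignore(mesh);

  // The trivial case short-circuits to the helper above.
  if (n == 1)
    {
      this->single_partition_range(it, end);
      return;
    }

  // ... otherwise assign elem->processor_id() across [it, end) ...
}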

Member Data Documentation

◆ _dual_graph

std::vector<std::vector<dof_id_type> > libMesh::Partitioner::_dual_graph
protectedinherited

The dual graph of the mesh, typically consumed by a graph partitioner: each vertex represents an element, and its neighbors are that element's neighboring elements.

Definition at line 288 of file partitioner.h.

Referenced by libMesh::Partitioner::build_graph().
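The container is a plain adjacency list indexed by local element id. As an illustration, a 1D mesh of three elements connected 0 - 1 - 2 would yield:

#include "libmesh/id_types.h"
#include <vector>

using namespace libMesh;

// Vertex i holds the local ids of element i's neighbors.
const std::vector<std::vector<dof_id_type>> dual_graph =
  {
    {1},    // element 0 touches element 1
    {0, 2}, // element 1 touches elements 0 and 2
    {1}     // element 2 touches element 1
  };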

◆ _global_index_by_pid_map

std::unordered_map<dof_id_type, dof_id_type> libMesh::Partitioner::_global_index_by_pid_map
protectedinherited

Maps active element ids into a contiguous range, as needed by parallel partitioners.

Definition at line 272 of file partitioner.h.

Referenced by libMesh::Partitioner::_find_global_index_by_pid_map(), libMesh::Partitioner::assign_partitioning(), and libMesh::Partitioner::build_graph().
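As a sketch of the numbering (the ids are illustrative): if processor 0 owns active elements {5, 9} and processor 1 owns {2, 7}, a pid-contiguous numbering maps

#include "libmesh/id_types.h"
#include <unordered_map>

using namespace libMesh;

// element id -> contiguous index, ordered first by owning pid
const std::unordered_map<dof_id_type, dof_id_type> global_index_by_pid =
  {
    {5, 0}, {9, 1}, // processor 0's elements come first,
    {2, 2}, {7, 3}  // then processor 1's
  };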

◆ _local_id_to_elem

std::vector<Elem *> libMesh::Partitioner::_local_id_to_elem
protectedinherited

Definition at line 291 of file partitioner.h.

Referenced by libMesh::Partitioner::build_graph().

◆ _n_active_elem_on_proc

std::vector<dof_id_type> libMesh::Partitioner::_n_active_elem_on_proc
protectedinherited

The number of active elements on each processor.

Note
ParMETIS requires that each processor have some active elements; it will abort if any processor passes a nullptr _part array.

Definition at line 281 of file partitioner.h.

Referenced by libMesh::Partitioner::_find_global_index_by_pid_map(), libMesh::Partitioner::assign_partitioning(), and libMesh::Partitioner::build_graph().

◆ _pmetis

std::unique_ptr<ParmetisHelper> libMesh::ParmetisPartitioner::_pmetis
private

Pointer to the Parmetis-specific data structures. Lets us avoid including parmetis.h here.

Definition at line 122 of file parmetis_partitioner.h.

◆ _weights

ErrorVector* libMesh::Partitioner::_weights
protectedinherited

The weights that might be used for partitioning.

Definition at line 267 of file partitioner.h.

Referenced by libMesh::MetisPartitioner::attach_weights(), and libMesh::MetisPartitioner::partition_range().
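A sketch of weighted partitioning with the MetisPartitioner, which implements attach_weights(); the node-count weight is an arbitrary stand-in for a real per-element cost, and the ErrorVector sizing is an assumption:

#include "libmesh/elem.h"
#include "libmesh/error_vector.h"
#include "libmesh/mesh_base.h"
#include "libmesh/metis_partitioner.h"

using namespace libMesh;

void weighted_partition (MeshBase & mesh)
{
  // One weight slot per possible element id; node count stands in
  // for a real per-element cost.
  ErrorVector weights(mesh.max_elem_id());
  for (auto & elem : mesh.active_element_ptr_range())
    weights[elem->id()] = static_cast<ErrorVectorReal>(elem->n_nodes());

  MetisPartitioner partitioner;
  partitioner.attach_weights(&weights); // the partitioner keeps the pointer,
  partitioner.partition(mesh, mesh.n_processors()); // so keep weights alive
}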

◆ communication_blocksize

const dof_id_type libMesh::Partitioner::communication_blocksize = 1000000
staticprotectedinherited

The blocksize to use when doing blocked parallel communication. This limits the maximum vector size which can be used in a single communication step.

Definition at line 244 of file partitioner.h.

Referenced by libMesh::Partitioner::set_parent_processor_ids().
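The blocked pattern this constant supports appears in set_parent_processor_ids() above, and the same shape works for any per-id global reduction. A sketch, assuming a Communicator and a dense id range; the fill step is elided:

#include "libmesh/id_types.h"
#include "libmesh/libmesh_common.h"
#include "libmesh/parallel.h"

#include <algorithm>
#include <vector>

using namespace libMesh;

// Reduce a per-id minimum globally without ever communicating a
// vector longer than `blocksize` entries.
void blocked_min (const Parallel::Communicator & comm,
                  const dof_id_type max_id,
                  const dof_id_type blocksize = 1000000)
{
  std::vector<processor_id_type> block (std::min(blocksize, max_id));

  for (dof_id_type blk=0, last_id=0; last_id < max_id; ++blk)
    {
      const dof_id_type first_id = blk*blocksize;
      last_id = std::min(static_cast<dof_id_type>((blk+1)*blocksize),
                         max_id);

      // ... fill block[i] with the local value for id first_id + i ...
      libmesh_ignore(first_id);

      comm.min(block); // one bounded-size reduction per block
    }
}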


The documentation for this class was generated from the following files: