1 "LAMMPS WWW Site"_lws - "LAMMPS Documentation"_ld - "LAMMPS Commands"_lc :c
:link(lws,http://lammps.sandia.gov)
:link(ld,Manual.html)
:link(lc,Section_commands.html#comm)

:line

processors command :h3

[Syntax:]
13 processors Px Py Pz keyword args ... :pre
15 Px,Py,Pz = # of processors in each dimension of 3d grid overlaying the simulation domain :ulb,l
16 zero or more keyword/arg pairs may be appended :l
17 keyword = {grid} or {map} or {part} or {file} :l
18 {grid} arg = gstyle params ...
19 gstyle = {onelevel} or {twolevel} or {numa} or {custom}
20 onelevel params = none
21 twolevel params = Nc Cx Cy Cz
22 Nc = number of cores per node
Cx,Cy,Cz = # of cores in each dimension of 3d sub-grid assigned to each node
numa params = none
custom params = infile
26 infile = file containing grid layout
27 {map} arg = {cart} or {cart/reorder} or {xyz} or {xzy} or {yxz} or {yzx} or {zxy} or {zyx}
28 cart = use MPI_Cart() methods to map processors to 3d grid with reorder = 0
29 cart/reorder = use MPI_Cart() methods to map processors to 3d grid with reorder = 1
xyz,xzy,yxz,yzx,zxy,zyx = map processors to 3d grid in IJK ordering
32 {part} args = Psend Precv cstyle
33 Psend = partition # (1 to Np) which will send its processor layout
Precv = partition # (1 to Np) which will recv the processor layout
cstyle = {multiple}
{multiple} = Psend grid will be multiple of Precv grid in each dimension
{file} arg = outfile
outfile = name of file to write 3d grid of processors to :pre
:ule

[Examples:]
45 processors * * 8 map xyz
46 processors * * * grid numa
47 processors * * * grid twolevel 4 * * 1
48 processors 4 8 16 grid custom myfile
processors * * * part 1 2 multiple :pre

[Description:]
53 Specify how processors are mapped as a regular 3d grid to the global
simulation box. The mapping involves 2 steps. First, the total
number of processors P is factored as P = Px by Py by Pz, so that
there are Px processors in the x dimension, and similarly for the y
and z dimensions. Second, the P processors are mapped to the regular
3d grid. The arguments to this command control each of these two
steps.
61 The Px, Py, Pz parameters affect the factorization. Any of the 3
62 parameters can be specified with an asterisk "*", which means LAMMPS
63 will choose the number of processors in that dimension of the grid.
64 It will do this based on the size and shape of the global simulation
box so as to minimize the surface-to-volume ratio of each processor's
sub-domain.
Specifying explicit values for Px, Py, or Pz overrides the default
manner in which LAMMPS creates the regular 3d grid of processors,
which is useful if the default is known to be sub-optimal for a
particular problem, e.g. a problem where the extent of atoms will
change dramatically in a particular dimension over the course of the
simulation.
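For example, for a slab-like system that stays thin in the z
dimension, one could pin that dimension to a single layer of
processors and let LAMMPS choose the other two (the choice of which
dimension to pin is only an illustration):

processors * * 1 :pre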
74 The product of Px, Py, Pz must equal P, the total # of processors
LAMMPS is running on. For a "2d simulation"_dimension.html, Pz must
equal 1.
78 Note that if you run on a prime number of processors P, then a grid
79 such as 1 x P x 1 will be required, which may incur extra
communication costs due to the high surface area of each processor's
sub-domain.
83 Also note that if multiple partitions are being used then P is the
84 number of processors in this partition; see "this
85 section"_Section_start.html#start_7 for an explanation of the
-partition command-line switch. You can prefix the processors command
with the "partition"_partition.html command to easily specify
different Px,Py,Pz values (i.e. different processor grids) for
different partitions, e.g.
93 partition yes 1 processors 4 4 4
94 partition yes 2 processors 2 3 2 :pre
96 NOTE: This command only affects the initial regular 3d grid created
97 when the simulation box is first specified via a
98 "create_box"_create_box.html or "read_data"_read_data.html or
"read_restart"_read_restart.html command, or when the simulation box
is re-created via the "replicate"_replicate.html command. The same
101 regular grid is initially created, regardless of which
102 "comm_style"_comm_style.html command is in effect.
104 If load-balancing is never invoked via the "balance"_balance.html or
105 "fix balance"_fix_balance.html commands, then the initial regular grid
106 will persist for all simulations. If balancing is performed, some of
the methods invoked by those commands retain the logical topology of
108 the initial 3d grid, and the mapping of processors to the grid
109 specified by the processors command. However the grid spacings in
110 different dimensions may change, so that processors own sub-domains of
111 different sizes. If the "comm_style tiled"_comm_style.html command is
112 used, methods invoked by the balancing commands may discard the 3d
113 grid of processors and tile the simulation domain with sub-domains of
114 different sizes and shapes which no longer have a logical 3d
115 connectivity. If that occurs, all the information specified by the
116 processors command is ignored.
120 The {grid} keyword affects the factorization of P into Px,Py,Pz and it
can also affect how the P processor IDs are mapped to the 3d grid of
processors.
124 The {onelevel} style creates a 3d grid that is compatible with the
125 Px,Py,Pz settings, and which minimizes the surface-to-volume ratio of
126 each processor's sub-domain, as described above. The mapping of
127 processors to the grid is determined by the {map} keyword setting.
129 The {twolevel} style can be used on machines with multicore nodes to
minimize off-node communication. It ensures that contiguous
131 sub-sections of the 3d grid are assigned to all the cores of a node.
132 For example if {Nc} is 4, then 2x2x1 or 2x1x2 or 1x2x2 sub-sections of
133 the 3d grid will correspond to the cores of each node. This affects
134 both the factorization and mapping steps.
136 The {Cx}, {Cy}, {Cz} settings are similar to the {Px}, {Py}, {Pz}
settings, except that their product should equal {Nc}. Any of the 3
138 parameters can be specified with an asterisk "*", which means LAMMPS
139 will choose the number of cores in that dimension of the node's
140 sub-grid. As with Px,Py,Pz, it will do this based on the size and
141 shape of the global simulation box so as to minimize the
142 surface-to-volume ratio of each processor's sub-domain.
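As an illustration, on nodes with 8 cores each (the node geometry
here is an assumption for the sketch, not a requirement), one might
use:

processors * * * grid twolevel 8 2 2 2 :pre

This tells LAMMPS that groups of 8 consecutive MPI ranks share a node
and that each node's cores should form a 2x2x2 sub-block of the
overall 3d grid.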
144 NOTE: For the {twolevel} style to work correctly, it assumes the MPI
145 ranks of processors LAMMPS is running on are ordered by core and then
146 by node. E.g. if you are running on 2 quad-core nodes, for a total of
147 8 processors, then it assumes processors 0,1,2,3 are on node 1, and
148 processors 4,5,6,7 are on node 2. This is the default rank ordering
149 for most MPI implementations, but some MPIs provide options for this
150 ordering, e.g. via environment variable settings.
The {numa} style operates similarly to the {twolevel} style except
153 that it auto-detects which cores are running on which nodes.
154 Currently, it does this in only 2 levels, but it may be extended in
155 the future to account for socket topology and other non-uniform memory
156 access (NUMA) costs. It also uses a different algorithm than the
157 {twolevel} keyword for doing the two-level factorization of the
158 simulation box into a 3d processor grid to minimize off-node
159 communication, and it does its own MPI-based mapping of nodes and
160 cores to the regular 3d grid. Thus it may produce a different layout
161 of the processors than the {twolevel} options.
163 The {numa} style will give an error if the number of MPI processes is
not divisible by the number of cores used per node, or if any of the
Px, Py, or Pz values is greater than 1.
167 NOTE: Unlike the {twolevel} style, the {numa} style does not require
any particular ordering of MPI ranks in order to work correctly. This
169 is because it auto-detects which processes are running on which nodes.
171 The {custom} style uses the file {infile} to define both the 3d
172 factorization and the mapping of processors to the grid.
174 The file should have the following format. Any number of initial
175 blank or comment lines (starting with a "#" character) can be present.
The first non-blank, non-comment line should have 3 values in it: the
Px, Py, Pz dimensions of the 3d grid. These must be compatible with
the total number of processors and the Px, Py, Pz settings of the
processors command.
184 This line should be immediately followed by
P = Px*Py*Pz lines of the form:

ID I J K :pre

where ID is a processor ID (from 0 to P-1) and I,J,K are the
processor's location in the 3d grid. I must be a number from 1 to Px
191 (inclusive) and similarly for J and K. The P lines can be listed in
192 any order, but no processor ID should appear more than once.
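A minimal sketch of such a file, assuming 8 processors arranged as a
2x2x2 grid (the specific layout is only an illustration):

# custom grid for 8 processors
2 2 2
0 1 1 1
1 2 1 1
2 1 2 1
3 2 2 1
4 1 1 2
5 2 1 2
6 1 2 2
7 2 2 2 :pre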
196 The {map} keyword affects how the P processor IDs (from 0 to P-1) are
197 mapped to the 3d grid of processors. It is only used by the
198 {onelevel} and {twolevel} grid settings.
200 The {cart} style uses the family of MPI Cartesian functions to perform
201 the mapping, namely MPI_Cart_create(), MPI_Cart_get(),
202 MPI_Cart_shift(), and MPI_Cart_rank(). It invokes the
203 MPI_Cart_create() function with its reorder flag = 0, so that MPI is
204 not free to reorder the processors.
206 The {cart/reorder} style does the same thing as the {cart} style
207 except it sets the reorder flag to 1, so that MPI can reorder
208 processors if it desires.
210 The {xyz}, {xzy}, {yxz}, {yzx}, {zxy}, and {zyx} styles are all
211 similar. If the style is IJK, then it maps the P processors to the
212 grid so that the processor ID in the I direction varies fastest, the
213 processor ID in the J direction varies next fastest, and the processor
214 ID in the K direction varies slowest. For example, if you select
215 style {xyz} and you have a 2x2x2 grid of 8 processors, the assignments
216 of the 8 octants of the simulation domain will be:
218 proc 0 = lo x, lo y, lo z octant
219 proc 1 = hi x, lo y, lo z octant
220 proc 2 = lo x, hi y, lo z octant
221 proc 3 = hi x, hi y, lo z octant
222 proc 4 = lo x, lo y, hi z octant
223 proc 5 = hi x, lo y, hi z octant
224 proc 6 = lo x, hi y, hi z octant
225 proc 7 = hi x, hi y, hi z octant :pre
227 Note that, in principle, an MPI implementation on a particular machine
228 should be aware of both the machine's network topology and the
229 specific subset of processors and nodes that were assigned to your
230 simulation. Thus its MPI_Cart calls can optimize the assignment of
231 MPI processes to the 3d grid to minimize communication costs. In
232 practice, however, few if any MPI implementations actually do this.
233 So it is likely that the {cart} and {cart/reorder} styles simply give
234 the same result as one of the IJK styles.
Also note that for the {twolevel} grid style, the {map} setting is
237 used to first map the nodes to the 3d grid, then again to the cores
238 within each node. For the latter step, the {cart} and {cart/reorder}
239 styles are not supported, so an {xyz} style is used in their place.
243 The {part} keyword affects the factorization of P into Px,Py,Pz.
245 It can be useful when running in multi-partition mode, e.g. with the
246 "run_style verlet/split"_run_style.html command. It specifies a
dependency between a sending partition {Psend} and a receiving
partition {Precv} which is enforced when each is setting up its own
mapping of its processors to the simulation box. Each of {Psend}
250 and {Precv} must be integers from 1 to Np, where Np is the number of
251 partitions you have defined via the "-partition command-line
252 switch"_Section_start.html#start_7.
254 A "dependency" means that the sending partition will create its
255 regular 3d grid as Px by Py by Pz and after it has done this, it will
256 send the Px,Py,Pz values to the receiving partition. The receiving
257 partition will wait to receive these values before creating its own
258 regular 3d grid and will use the sender's Px,Py,Pz values as a
constraint. The nature of the constraint is determined by the
{cstyle} setting.
262 For a {cstyle} of {multiple}, each dimension of the sender's processor
263 grid is required to be an integer multiple of the corresponding
264 dimension in the receiver's processor grid. This is a requirement of
265 the "run_style verlet/split"_run_style.html command.
267 For example, assume the sending partition creates a 4x6x10 grid = 240
268 processor grid. If the receiving partition is running on 80
269 processors, it could create a 4x2x10 grid, but it will not create a
2x4x10 grid, since in the y-dimension, 6 is not an integer multiple
of 4.
273 NOTE: If you use the "partition"_partition.html command to invoke
different "processors" commands on different partitions, and you also
use the {part} keyword, then you must ensure that both the sending and
receiving partitions invoke the "processors" command that connects the
2 partitions via the {part} keyword. LAMMPS cannot easily check for
this, but your simulation will likely hang in its setup phase if this
is not done.
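For example, a minimal sketch for a two-partition
"run_style verlet/split"_run_style.html run matching the 240/80
processor scenario above (the executable and script names are only
placeholders):

mpirun -np 320 lmp_machine -partition 240 80 -in in.script :pre

Since both partitions read the same input script, a single
un-prefixed command establishes the dependency on both sides:

processors * * * part 1 2 multiple :pre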
The {file} keyword writes the factorization of the P processors and
their mapping to the 3d grid to the specified file {outfile}. This is
useful to check that you assigned physical processors in the manner
you desired, which can be tricky to figure out, especially when
running on multiple partitions, on a multicore machine, or when the
processor ranks were reordered by use of the
"-reorder command-line switch"_Section_start.html#start_7 or due to
use of MPI-specific launch options such as a config file.
If you have multiple partitions you should ensure that each one writes
to a different file, e.g. using a "world-style variable"_variable.html
for the filename; see the sketch at the end of this section. The file
has a self-explanatory header, followed by one line per processor in
this format:

world-ID universe-ID original-ID: I J K: name :pre
299 The IDs are the processor's rank in this simulation (the world), the
300 universe (of multiple simulations), and the original MPI communicator
301 used to instantiate LAMMPS, respectively. The world and universe IDs
302 will only be different if you are running on more than one partition;
303 see the "-partition command-line switch"_Section_start.html#start_7.
304 The universe and original IDs will only be different if you used the
305 "-reorder command-line switch"_Section_start.html#start_7 to reorder
306 the processors differently than their rank in the original
307 communicator LAMMPS was instantiated with.
309 I,J,K are the indices of the processor in the regular 3d grid, each
from 1 to Nd, where Nd is the number of processors in that dimension
of the grid.
313 The {name} is what is returned by a call to MPI_Get_processor_name()
314 and should represent an identifier relevant to the physical processors
315 in your machine. Note that depending on the MPI implementation,
316 multiple cores can have the same {name}.
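For example, a minimal sketch that gives each partition its own output
file via a world-style variable (the file names are only
placeholders):

variable gfile world grid.part1 grid.part2
processors * * * file ${gfile} :pre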
:line

[Restrictions:]

This command cannot be used after the simulation box is defined by a
323 "read_data"_read_data.html or "create_box"_create_box.html command.
324 It can be used before a restart file is read to change the 3d
325 processor grid from what is specified in the restart file.
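For instance, a hedged sketch of overriding the processor grid stored
in a restart file, assuming a 32-processor run (the file name and grid
values are illustrative):

processors 2 4 4
read_restart restart.equil :pre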
The {grid numa} keyword currently only works with the {map cart}
option.
330 The {part} keyword (for the receiving partition) only works with the
331 {grid onelevel} or {grid twolevel} options.
[Related commands:]

"partition"_partition.html, "-reorder command-line switch"_Section_start.html#start_7

[Default:]

The option defaults are Px Py Pz = * * *, grid = onelevel, and map =
cart.