Commit b3db01f

Merge pull request #31 from wkliao/test_readme
revise README.md
2 parents 0e5175a + bdc3e6b commit b3db01f

18 files changed: +484, -333 lines

examples/Pytorch_DDP/README.md

Lines changed: 21 additions & 14 deletions
````diff
@@ -1,20 +1,27 @@
-# Example Python programs that use Pytorch Distributed Data Parallel module
+# Example Python programs that use Pytorch DDP module
 
 This directory contains example python programs that make use of Pytorch
-Distributed Data Parallel (DDP) module and MPI to run on multiple MPI processes
-in parallel. Detailed information describing the example programs is provided
-at the beginning of each file.
+Distributed Data Parallel
+([DDP](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html)) module
+and [mpi4py](https://mpi4py.readthedocs.io/en/stable/) to run on multiple MPI
+processes in parallel. Detailed information describing the example programs is
+provided at the beginning of each file.
 
-## [torch_ddp_skeleton.py](./torch_ddp_skeleton.py) shows how to set up the MPI
-and DDP environment to run a program in parallel.
+* [torch_ddp_skeleton.py](#torch_ddp_skeleton_py) -- a template for using
+  Pytorch DDP
 
-Command usage:
-```sh
-% mpiexec -n 4 python ./torch_ddp_skeleton.py
-nprocs = 4 rank = 0 device = cpu
-nprocs = 4 rank = 1 device = cpu
-nprocs = 4 rank = 2 device = cpu
-nprocs = 4 rank = 3 device = cpu
-```
+---
 
+## torch_ddp_skeleton_py
+[torch_ddp_skeleton.py](./torch_ddp_skeleton.py) is a skeleton program showing
+how to set up the MPI and DDP environment to run a program in parallel.
+
+* Command usage and output on screen:
+  ```sh
+  % mpiexec -n 4 python ./torch_ddp_skeleton.py
+  nprocs = 4 rank = 0 device = cpu
+  nprocs = 4 rank = 1 device = cpu
+  nprocs = 4 rank = 2 device = cpu
+  nprocs = 4 rank = 3 device = cpu
+  ```
````
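For readers new to this pattern, the following is a minimal sketch of how a program like torch_ddp_skeleton.py typically wires MPI and DDP together. It is an illustration only: the gloo backend, the environment-variable rendezvous, and the device-selection logic are assumptions, not necessarily what the actual script does.

```python
# A minimal MPI + DDP setup sketch. Assumptions: mpi4py is installed and
# the PyTorch build ships the gloo backend; torch_ddp_skeleton.py itself
# may initialize things differently.
import os
import torch
import torch.distributed as dist
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nprocs = comm.Get_size()

# All ranks must agree on a rendezvous address (assumed single-node here).
os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")

dist.init_process_group("gloo", rank=rank, world_size=nprocs)

# Pick a GPU when available, otherwise fall back to the CPU.
if torch.cuda.is_available():
    device = torch.device("cuda", rank % torch.cuda.device_count())
else:
    device = torch.device("cpu")
print(f"nprocs = {nprocs} rank = {rank} device = {device}")

dist.destroy_process_group()
```

Launched with `mpiexec -n 4`, this prints one `nprocs = 4 rank = ... device = ...` line per rank, matching the skeleton's sample output above.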

examples/README.md

Lines changed: 106 additions & 0 deletions
````diff
@@ -0,0 +1,106 @@
+# PnetCDF-python examples
+
+This directory contains example python programs that make use of PnetCDF to
+perform file I/O. Detailed description of each program and run instructions can
+be found at the beginning of each file.
+
+---
+### Running individual example programs
+
+* Use command `mpiexec` to run individual programs. For example, the command
+  line below runs `collective_write.py` on 4 MPI processes.
+  ```sh
+  mpiexec -n 4 python collective_write.py [output_dir]
+  ```
+* The optional argument `output_dir` enables the testing program to save the
+  generated output files in the specified directory. Default is the current
+  directory.
+
+---
+### Overview of Test Programs
+
+* [Pytorch_DDP](./Pytorch_DDP)
+  + A directory containing examples that make use of Pytorch Distributed Data
+    Parallel module to run python programs in parallel.
+
+* [collective_write.py](./collective_write.py)
+  + writes multiple 3D subarrays to non-record variables of int type using
+    collective I/O mode.
+
+* [put_vara.py](./put_vara.py)
+  + This example shows how to use `Variable` method put_var() to write a 2D
+    integer array in parallel. The data partitioning pattern is a column-wise
+    partitioning across all processes.
+
+* [get_vara.py](./get_vara.py)
+  + This is the read counterpart of [put_vara.py](./put_vara.py), which shows
+    how to use `Variable` method get_var() to read a 2D 4-byte integer array
+    in parallel.
+
+* [nonblocking_write.py](./nonblocking_write.py)
+  + Similar to `collective_write.py`, but it uses nonblocking APIs instead. It
+    creates a netcdf file in CDF-5 format and writes a number of 3D integer
+    non-record variables.
+
+* [nonblocking_write_def.py](./nonblocking_write_def.py)
+  + This is the same as `nonblocking_write.py` except all nonblocking write
+    requests (calls to `iput` and `bput`) are posted in define mode. It creates
+    a netcdf file in CDF-5 format and writes a number of 3D integer non-record
+    variables.
+
+* [create_open.py](./create_open.py)
+  + This example shows how to use `File` class constructor to create a netCDF
+    file and to open the file for read only.
+
+* [ghost_cell.py](./ghost_cell.py)
+  + This example shows how to use `Variable` method to write a 2D array from a
+    user buffer with ghost cells.
+
+* [fill_mode.py](./fill_mode.py)
+  + This example shows how to use `Variable` class methods and `File` class
+    methods to set the fill mode of variables and fill values.
+    * `set_fill()` to enable fill mode of the file
+    * `def_fill()` to enable fill mode and define the variable's fill value
+    * `inq_var_fill()` to inquire the variable's fill mode information
+    * `put_vara_all()` to write two 2D 4-byte integer arrays in parallel.
+
+* [global_attribute.py](./global_attribute.py)
+  + This example shows how to use `File` method `put_att()` to write a global
+    attribute to a file.
+
+* [flexible_api.py](./flexible_api.py)
+  + This example shows how to use `Variable` flexible API methods put_var() and
+    iput_var() to write a 2D 4-byte integer array in parallel.
+
+* [hints.py](./hints.py)
+  + This example sets two PnetCDF hints: `nc_header_align_size` and
+    `nc_var_align_size` and prints the hint values as well as the header size,
+    header extent, and two variables' starting file offsets.
+
+* [transpose2D.py](./transpose2D.py)
+  + This example shows how to use `Variable` method `put_var()` to write a 2D
+    integer array variable into a file. The variable in the file is a
+    dimensionally transposed array from the one stored in memory.
+
+* [get_info.py](./get_info.py)
+  + This example prints all MPI-IO hints used.
+
+* [put_varn_int.py](./put_varn_int.py)
+  + This example shows how to use a single call of `Variable` method
+    `put_var()` to write a sequence of requests with arbitrary array indices
+    and lengths.
+
+* [transpose.py](./transpose.py)
+  + This example shows how to use `Variable` method `put_var()` to write six 3D
+    integer array variables into a file. Each variable in the file is a
+    dimensionally transposed array from the one stored in memory. In memory, a
+    3D array is partitioned among all processes in a block-block-block fashion
+    and in ZYX (i.e. C) order. The dimension structures of the six transposed
+    arrays are
+    * int ZYX_var(Z, Y, X) ; ZYX -> ZYX
+    * int ZXY_var(Z, X, Y) ; ZYX -> ZXY
+    * int YZX_var(Y, Z, X) ; ZYX -> YZX
+    * int YXZ_var(Y, X, Z) ; ZYX -> YXZ
+    * int XZY_var(X, Z, Y) ; ZYX -> XZY
+    * int XYZ_var(X, Y, Z) ; ZYX -> XYZ
+
````
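Most of the writer examples listed in this new README share one pattern: each rank derives its own start/count offsets and issues a collective put. Below is a minimal sketch of that pattern in the spirit of put_vara.py. The `def_dim`/`def_var`/`enddef`/`put_var_all` names and the `NC_INT` constant are assumptions modeled on the PnetCDF C API and the method names quoted in these diffs; consult put_vara.py for the real calls.

```python
# Sketch of a column-wise parallel write (cf. put_vara.py). Method names
# def_dim/def_var/enddef/put_var_all and the NC_INT constant are assumed,
# modeled on the PnetCDF C API; the real example may differ.
import numpy as np
from mpi4py import MPI
import pnetcdf

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()
NY, NX = 10, 4                    # each rank owns one NY x NX column block

f = pnetcdf.File("testfile.nc", mode="w", comm=comm)
dim_y = f.def_dim("Y", NY)
dim_x = f.def_dim("X", NX * nprocs)
var = f.def_var("var", pnetcdf.NC_INT, (dim_y, dim_x))
f.enddef()

# Column-wise partitioning: rank r writes columns [r*NX, (r+1)*NX).
buf = np.full((NY, NX), rank, dtype=np.int32)
var.put_var_all(buf, start=[0, rank * NX], count=[NY, NX])
f.close()
```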

examples/collective_write.py

Lines changed: 10 additions & 9 deletions
````diff
@@ -4,20 +4,21 @@
 #
 
 """
-This example mimics the coll_perf.c from ROMIO. It creates a netcdf file and
-writes a number of 3D integer non-record variables. The measured write bandwidth
-is reported at the end.
-To run:
-% mpiexec -n num_process python3 collective_write.py [test_file_name] [-l len]
+This example mimics the coll_perf.c from ROMIO. It creates a netcdf file and
+writes a number of 3D integer non-record variables. The measured write
+bandwidth is reported at the end.
+To run:
+% mpiexec -n num_process python3 collective_write.py [test_file_name] [-l len]
 where len decides the size of each local array, which is len x len x len.
 So, each non-record variable is of size len*len*len * nprocs * sizeof(int)
-All variables are partitioned among all processes in a 3D block-block-block
+All variables are partitioned among all processes in a 3D block-block-block
 fashion.
-Example commands for MPI run and outputs from running ncmpidump on the
-netCDF file produced by this example program:
+
+Example commands for MPI run and outputs from running ncmpidump on the
+netCDF file produced by this example program:
 % mpiexec -n 32 python3 collective_write.py tmp/test1.nc -l 100
 % ncmpidump tmp/test1.nc
-
+
 Example standard output:
 MPI hint: cb_nodes = 2
 MPI hint: cb_buffer_size = 16777216
````
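The block-block-block decomposition described in this docstring can be computed from the rank alone. Here is a sketch of the index arithmetic; the ZYX ordering of the grid coordinates is an assumption, so see collective_write.py for the actual layout.

```python
# Sketch: derive one rank's start offsets for a 3D block-block-block
# partition with a len x len x len local array per process. The ZYX
# ordering of grid coordinates is an assumption.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()
length = 100                                # the "-l len" option

psizes = MPI.Compute_dims(nprocs, 3)        # process grid, e.g. [4, 4, 2]
z = rank // (psizes[1] * psizes[2])         # rank -> (z, y, x) coordinate
y = (rank // psizes[2]) % psizes[1]
x = rank % psizes[2]

starts = [z * length, y * length, x * length]   # this rank's corner
counts = [length, length, length]               # local block shape
gsizes = [p * length for p in psizes]           # global variable shape
print(rank, starts, counts, gsizes)
```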

examples/create_open.py

Lines changed: 9 additions & 9 deletions
````diff
@@ -4,17 +4,17 @@
 #
 
 """
-This example shows how to use `File` class constructor to create a netCDF file and to
-open the file for read only.
+This example shows how to use `File` class constructor to create a netCDF file
+and to open the file for read only.
 
-Example commands for MPI run and outputs from running ncmpidump on the
-netCDF file produced by this example program:
-% mpiexec -n 4 python3 create_open.py /tmp/test1.nc
-% ncmpidump /tmp/test1.nc
-netcdf test1 {
-// file format: CDF-1
-}
+Example commands for MPI run and outputs from running ncmpidump on the
+netCDF file produced by this example program:
 
+% mpiexec -n 4 python3 create_open.py /tmp/test1.nc
+% ncmpidump /tmp/test1.nc
+netcdf test1 {
+// file format: CDF-1
+}
 """
 
 import sys
````
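A minimal sketch of the create-then-open pattern this docstring describes, assuming the `File` constructor accepts netCDF4-style mode strings ("w" to create, "r" for read only); the actual argument names are in create_open.py.

```python
# Sketch of create-then-open with the `File` constructor; the "w"/"r"
# mode strings and keyword names are assumptions.
from mpi4py import MPI
import pnetcdf

comm = MPI.COMM_WORLD

f = pnetcdf.File("/tmp/test1.nc", mode="w", comm=comm)   # collective create
f.close()

f = pnetcdf.File("/tmp/test1.nc", mode="r", comm=comm)   # reopen read-only
f.close()
```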

examples/fill_mode.py

Lines changed: 7 additions & 7 deletions
````diff
@@ -4,18 +4,18 @@
 #
 
 """
-This example shows how to use `Variable` class methods and `File` class methods
-to set the fill mode of variables and fill values.
+This example shows how to use `Variable` class methods and `File` class methods
+to set the fill mode of variables and fill values.
 * 1. set_fill() to enable fill mode of the file
 * 2. def_fill() to enable fill mode and define the variable's fill value
 * 3. inq_var_fill() to inquire the variable's fill mode information
 * 4. put_vara_all() to write two 2D 4-byte integer array in parallel.
 
+Example commands for MPI run and outputs from running ncmpidump on the
+netCDF file produced by this example program:
 
-Example commands for MPI run and outputs from running ncmpidump on the
-netCDF file produced by this example program:
-% mpiexec -n 4 python3 fill_mode.py tmp/test1.nc
-% ncmpidump tmp/test1.nc
+% mpiexec -n 4 python3 fill_mode.py tmp/test1.nc
+% ncmpidump tmp/test1.nc
 netcdf test1 {
 // file format: CDF-1
 dimensions:
@@ -80,7 +80,7 @@ def main():
     parser.add_argument("dir", nargs="?", type=str, help="(Optional) output netCDF file name",\
                         default = "testfile.nc")
     parser.add_argument("-q", help="Quiet mode (reports when fail)", action="store_true")
-
+
     args = parser.parse_args()
     if args.q:
         verbose = False
````
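The four calls enumerated in the docstring map onto a short define-mode sequence. A sketch follows, with the `NC_FILL` constant and the argument conventions of `set_fill`/`def_fill` assumed from the PnetCDF C API; fill_mode.py has the authoritative usage.

```python
# Sketch of the fill-mode sequence from the docstring. NC_FILL and the
# def_fill argument names are assumptions based on the PnetCDF C API.
from mpi4py import MPI
import pnetcdf

comm = MPI.COMM_WORLD
f = pnetcdf.File("testfile.nc", mode="w", comm=comm)
dim = f.def_dim("Y", 10)
var = f.def_var("var", pnetcdf.NC_INT, (dim,))

f.set_fill(pnetcdf.NC_FILL)               # 1. enable fill mode for the file
var.def_fill(no_fill=0, fill_value=-1)    # 2. define this variable's fill value
no_fill, fill_value = var.inq_var_fill()  # 3. query the fill settings back
f.enddef()
f.close()
```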

examples/flexible_api.py

Lines changed: 22 additions & 24 deletions
````diff
@@ -4,28 +4,27 @@
 #
 
 """
-/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
-This example shows how to use `Variable` method put_var() and iput_var() to write a 2D 4-byte
-integer array in parallel (one is of 4-byte
-integer byte and the other float type) in parallel. It first defines 2 netCDF
-variables of sizes
+This example shows how to use `Variable` flexible API methods put_var() and
+iput_var() to write two 2D arrays in parallel (one of 4-byte integer type and
+the other of float type). It first defines 2 netCDF variables of
+sizes
     var_zy: NZ*nprocs x NY
     var_yx: NY x NX*nprocs
-
-The data partitioning patterns on the 2 variables are row-wise and
-column-wise, respectively. Each process writes a subarray of size
-NZ x NY and NY x NX to var_zy and var_yx, respectively.
-Both local buffers have a ghost cell of length 3 surrounded along each
-dimension.
-To run:
-% mpiexec -n num_process python3 flexible_api.py [test_file_name]
-
-Example commands for MPI run and outputs from running ncmpidump on the
-output netCDF file produced by this example program:
-
-% mpiexec -n 4 python3 flexible_api.py /tmp/test1.nc
-
-% ncmpidump /tmp/test1.nc
+
+The data partitioning patterns on the 2 variables are row-wise and column-wise,
+respectively. Each process writes a subarray of size NZ x NY and NY x NX to
+var_zy and var_yx, respectively. Both local buffers have a ghost cell of
+length 3 surrounded along each dimension.
+
+To run:
+% mpiexec -n num_process python3 flexible_api.py [test_file_name]
+
+Example commands for MPI run and outputs from running ncmpidump on the
+output netCDF file produced by this example program:
+
+% mpiexec -n 4 python3 flexible_api.py /tmp/test1.nc
+
+% ncmpidump /tmp/test1.nc
 netcdf testfile {
 // file format: CDF-5 (big variables)
 dimensions:
@@ -36,7 +35,7 @@
 int var_zy(Z, Y) ;
 float var_yx(Y, X) ;
 data:
-
+
 var_zy =
 0, 0, 0, 0, 0,
 0, 0, 0, 0, 0,
@@ -58,15 +57,14 @@
 3, 3, 3, 3, 3,
 3, 3, 3, 3, 3,
 3, 3, 3, 3, 3 ;
-
+
 var_yx =
 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3,
 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3,
 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3,
 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3,
 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3 ;
 }
-* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
 """
 
 import sys
@@ -167,7 +165,7 @@ def main():
     buf_zy.fill(-1)
     var_zy.get_var_all(buf_zy, start = starts, count = counts, bufcount = 1, buftype = subarray)
     # print(buf_zy.reshape(array_of_sizes))
-
+
     # check contents of the get buffer
     for i in range(array_of_sizes[0]):
         for j in range(array_of_sizes[1]):
````
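The `buftype=subarray` argument in the hunk above is an MPI derived datatype that skips the ghost cells in the local buffer. Here is a sketch of how such a type is typically built with mpi4py; the sizes are illustrative, and the ghost-cell width of 3 follows the docstring.

```python
# Sketch: the MPI subarray datatype passed as buftype in the flexible
# API calls above. Sizes are illustrative; ghost width 3 follows the
# docstring.
from mpi4py import MPI

ghost = 3
NZ, NY = 5, 5
sizes    = [NZ + 2 * ghost, NY + 2 * ghost]   # whole local buffer
subsizes = [NZ, NY]                           # interior (real) region
starts   = [ghost, ghost]                     # skip leading ghost cells

subarray = MPI.INT.Create_subarray(sizes, subsizes, starts,
                                   order=MPI.ORDER_C)
subarray.Commit()
# ... pass bufcount=1, buftype=subarray to put_var_all/get_var_all ...
subarray.Free()
```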

examples/get_info.py

Lines changed: 7 additions & 6 deletions
````diff
@@ -4,9 +4,10 @@
 #
 
 """
+This example prints all MPI-IO hints used.
 
-Example commands for MPI run and outputs from running ncmpidump on the
-netCDF file produced by this example program:
+Example commands for MPI run and outputs from running ncmpidump on the
+netCDF file produced by this example program:
 % mpiexec -n 4 python3 get_info.py tmp/test1.nc
 % ncmpidump tmp/test1.nc
 Example standard output:
@@ -29,7 +30,7 @@
 MPI File Info: [15] key = nc_header_align_size, value = 512
 MPI File Info: [16] key = nc_var_align_size, value = 512
 MPI File Info: [17] key = nc_header_read_chunk_size, value = 0
-
+
 """
 
 import sys
@@ -66,10 +67,10 @@ def print_info(info_used):
         value = info_used.Get(key)
         print("MPI File Info: [{:2d}] key = {:25s}, value = {}".format(i, key, value))
 
-
+
 def main():
     nprocs = size
-
+
     global verbose
     if parse_help():
         MPI.Finalize()
@@ -80,7 +81,7 @@ def main():
     parser.add_argument("dir", nargs="?", type=str, help="(Optional) output netCDF file name",\
                         default = "testfile.nc")
     parser.add_argument("-q", help="Quiet mode (reports when fail)", action="store_true")
-
+
     args = parser.parse_args()
     if args.q:
         verbose = False
````
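The loop that produces the `MPI File Info` lines above walks a standard mpi4py `Info` object. A self-contained sketch follows; the Info object here is built by hand, whereas get_info.py presumably obtains it from the open file instead.

```python
# Sketch of walking an MPI Info object with mpi4py, as print_info above
# does; here the Info is built by hand rather than taken from a file.
from mpi4py import MPI

def print_info(info_used):
    for i in range(info_used.Get_nkeys()):
        key = info_used.Get_nthkey(i)
        value = info_used.Get(key)
        print("MPI File Info: [{:2d}] key = {:25s}, value = {}".format(i, key, value))

info = MPI.Info.Create()
info.Set("cb_nodes", "2")
print_info(info)
info.Free()
```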
