
Running Fluidity in parallel


Introduction

Fluidity is parallelised using MPI and standard domain decomposition techniques: the domain (mesh) is split into partitions and each processor (core) is assigned one partition, on which it solves the problem. During the simulation the processors need to exchange information; this is handled by Fluidity in a manner transparent to the user.

The steps necessary to run Fluidity in parallel are as follows:

  1. Decompose the mesh. As mentioned above, the mesh must be partitioned. This is done via the scripts fldecomp or flredecomp, which take the mesh file and the number of partitions as input and return the decomposed mesh. See the section Decomposing the Mesh below.
  2. Choose appropriate matrix solvers; the section Parallel Specific Options below gives some advice.
  3. Run your simulation. Running in parallel is straightforward on a desktop or laptop, but additional scripts may be necessary when using specialised supercomputers. See the section Launching Fluidity below. A minimal end-to-end sketch is given after this list.
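
As a quick illustration, the following is a minimal sketch of the whole workflow for a hypothetical serial set-up stored in sim.flml, decomposed into four partitions and run on four cores (the file names are placeholders; each step is described in detail in the sections below):

mpirun -n 4 flredecomp -i 1 -o 4 sim sim_parallel
mpiexec -n 4 fluidity -v2 -l [OPTIONS FILE]

where [OPTIONS FILE] is the decomposed options file written by flredecomp.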

Reviewer's notes:

  • Expand; also keep first-time users in mind.
  • Update the whole page; some material appears to be obsolete.

Decomposing the Mesh

To decompose a triangle mesh you must use fldecomp or flredecomp. Both tools are part of fltools; please look at fluidity tools if you have not already built and/or installed them. Below we show the basics of fldecomp and flredecomp; please look at fluidity tools for more information.

Decomposing with fldecomp

The 'fldecomp' tool can be used as follows:

fldecomp -n [PARTS] -m triangle [BASENAME]

where BASENAME is the triangle mesh base name (excluding extensions) and "-m triangle" instructs fldecomp to perform a triangle-to-triangle decomposition. This will create PARTS partitioned triangle meshes together with PARTS .halo files. Meshes stored in binary Gmsh format can also be partitioned using 'fldecomp'; see fluidity tools for more information, including further options of the 'fldecomp' tool. Once a partitioned mesh is available, Fluidity can be launched as described in the section Launching Fluidity below.
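
For example, a hypothetical triangle mesh stored as channel.node, channel.ele and channel.edge could be split into four partitions with:

fldecomp -n 4 -m triangle channel

This is only a sketch; the exact set of per-partition files written (for example channel_0.node together with the matching .halo files) depends on the type and dimension of the mesh, as detailed on the fluidity tools page.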

Decomposing with flredecomp

The 'flredecomp' tool differs from the fldecomp tool in a number of ways. flredecomp was developed to support more operations than fldecomp; most importantly, it can re-decompose already partitioned meshes. For this reason 'flredecomp' operates on flml files rather than only on mesh files, which is both practically and conceptually cleaner.

In terms of basic usage, if the file 'options.flml' contains the options tree for a simulation, the following command will decompose the set-up into 3 partitions:

mpirun -n 3 flredecomp -i 1 -o 3 options options_decomposed

The above will produce a set of three files, 'options_decomposed_0.flml', 'options_decomposed_1.flml' and 'options_decomposed_2.flml', as well as the corresponding mesh files. Note that 'flredecomp' itself must be run in parallel, on a number of cores equal to the larger of the input and output partition counts. The general usage of flredecomp is:

mpirun -n PROCS flredecomp -i INPUT_PARTS -o OUTPUT_PARTS INPUT OUTPUT

where PROCS is the maximum of INPUT_PARTS and OUTPUT_PARTS, INPUT is the base name of the input flmls and OUTPUT is the base name of the output flmls.
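
For example, to go from the three-partition set-up produced above to eight partitions, a hypothetical re-decomposition would be:

mpirun -n 8 flredecomp -i 3 -o 8 options_decomposed options_8

Here 'options_8' is simply a placeholder output base name, and the job is run on eight cores because that is the maximum of the input (3) and output (8) partition counts.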

Parallel Specific Options

In the options file, select "triangle" under /geometry/mesh/from_file/format for the from_file mesh. For the mesh filename, enter the triangle mesh base name excluding all file and process number extensions.

Also:

  • Remember to select parallel-compatible preconditioners in the prognostic field solver options; eisenstat is not suitable for parallel simulations.

Launching Fluidity

To launch a parallel simulation, pass the options (flml) file on the Fluidity command line and run under MPI, e.g.:

mpiexec fluidity -v2 -l [OPTIONS FILE]
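
The number of MPI processes must match the number of partitions the mesh was decomposed into. For instance, for the three-partition decomposition produced in the flredecomp example above, a sketch of the launch command would be:

mpiexec -n 3 fluidity -v2 -l [OPTIONS FILE]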

Example 1 - Straight run

gormo@rex:~$ cat host_file
rex
rex
rex
rex
mpirun -np 4 --hostfile host_file $PWD/dfluidity tank.flml

Example 2 - running inside gdb

xhost +rex
gormo@rex:~$ echo $DISPLAY
:0.0
mpirun -np 4 -x DISPLAY=:0.0 xterm -e gdb $PWD/dfluidity-debug

To run in a batch job on cx1, use something like the following PBS script:

#!/bin/bash
#Job name
#PBS -N backward_step
# Time required in hh:mm:ss
#PBS -l walltime=48:00:00
# Resource requirements
# Always try to specify exactly what we need and the PBS scheduler
# will make sure to get your job running as quick as possible. If
# you ask for too much you could be waiting a while for sufficient
# resources to become available. Experiment!
#PBS -l select=2:ncpus=4
# Files to contain standard output and standard error
##PBS -o stdout
##PBS -e stderr
PROJECT=backward_facing_step_3d.flml
echo Working directory is $PBS_O_WORKDIR
cd $PBS_O_WORKDIR
rm -f stdout* stderr* core*
module load fluidity
# Launch the parallel run (-v2 sets the verbosity and -l sends the log output to file)
mpiexec $PWD/fluidity -v2 -l $PWD/$PROJECT

This will run on 8 processors (2 * 4, from the line '#PBS -l select=2:ncpus=4').

Visualising Data

The output from a parallel run is a set of .vtu and .pvtu files. A .vtu file is written for each processor at each output timestep, e.g. backward_facing_step_3d_191_0.vtu is the .vtu file for dump 191 from processor 0. A .pvtu file, which references the per-processor .vtu files, is also generated for each output timestep, e.g. backward_facing_step_3d_191.pvtu for dump 191.

The best way to view the output is with ParaView: simply open the .pvtu file.
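
For example, assuming paraview is on your PATH, a single output dump can be opened directly from the command line (the file name here is taken from the example above):

paraview --data=backward_facing_step_3d_191.pvtu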

On cx1, you will need to load the paraview module: module load paraview/3.4.0
