Tag Archives: beamnrc

X Forwarding with Cygwin

I used the EGSnrc/BEAMnrc/DOSXYZnrc Monte Carlo simulation codes during my doctoral and post-doctoral work at QUT. This involved the use of QUT’s High Performance Computing (HPC) facilities. I’ve previously written about setting up EGSnrc on that system here.

A number of tools written for use with BEAMnrc and DOSXYZnrc only work in a Unix environment; DOSXYZ_SHOW, for example, allows the visualisation of simulation geometries and dose distributions. At the time, QUT provided the X-Win32 software so that such graphical user interface applications could be viewed remotely from a Windows PC. Here, for reference, I describe how I used Cygwin/X for X forwarding over SSH.

X Forwarding over SSH using Cygwin/X.

The first step is to install Cygwin, available here. During the installation you will need to specify that you want the “openssh” and “xinit” packages installed. Once complete, a shortcut to the XWin Server will appear in your start menu. When run, this will open an X terminal on your local PC (like the one seen above). A connection to HPC should be established with:

$ ssh -X user@lyra.qut.edu.au

Once connected, you will be able to run applications with graphical user interfaces, such as DOSXYZ_SHOW.
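If graphical applications fail to start, it is worth checking that the SSH connection actually set the DISPLAY environment variable. A minimal sketch (check_x_forwarding is my own helper name, not part of any tool):

```shell
# Sketch: report whether X forwarding appears active on the remote session.
# check_x_forwarding is a hypothetical helper; pass it the value of $DISPLAY.
check_x_forwarding() {
    if [ -n "$1" ]; then
        echo "DISPLAY=$1, X forwarding appears active"
    else
        echo "DISPLAY is empty, reconnect with ssh -X"
    fi
}

check_x_forwarding "$DISPLAY"
```

If DISPLAY remains empty after connecting with ssh -X, the server may require trusted forwarding (ssh -Y) or may have X11 forwarding disabled in its sshd configuration.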

EGSnrc on QUT HPC

High performance computing (HPC) facilities are very useful for Monte Carlo simulations – the simulation of particle transport is an ideal application for parallel processing. This post discusses how the HPC facilities at the Queensland University of Technology (QUT) can be used for EGSnrc simulations by students and staff. The High Performance Computing and Research Support Group have provided a number of user guides that detail how to access HPC resources such as the network file system and the job submission queues. Depending on your UNIX experience, I'd also recommend familiarising yourself with a text editor such as Vim, for example with this guide.

Setting up EGSnrc

Depending on your application for HPC resources, your account may already have EGSnrc installed. If not, you can install the EGSnrc and BEAMnrc user codes in your home directory using the following commands (where v4-2-3-2 is the latest version):

> cp /pkg/suse11/egs/v4-2-3-2/NewUser/profile_beam /home/$USER/.profile_beam
> echo "source .profile_beam" >> /home/$USER/.profile
> source ~/.profile_beam
> /pkg/suse11/egs/v4-2-3-2/scripts/finalize_egs_foruser
> /pkg/suse11/egs/v4-2-3-2/scripts/finalize_beam_foruser

These commands may prompt you to enter an installation directory, which should be

/home/$USER/egsnrc/

The first thing to do with a new EGSnrc installation is to copy over any existing files, with accelerator module specifications placed in:

/home/$USER/egsnrc/beamnrc/spec_modules/

and existing PEGS databases in:

/home/$USER/egsnrc/pegs4/data/

Each of the accelerator module specifications then needs to be 'built', using

> beam_build.exe module

where module is the name of the accelerator module specification file (without an extension). Each time beam_build.exe is executed, a new directory will be created in the EGSnrc user folder, with a name like

/home/$USER/egsnrc/BEAM_WRO_Varian_21iX_10X

For each directory created, you should compile the accelerator both as an executable and a library. For the example mentioned, you would type:

> cd ~/egsnrc/BEAM_WRO_Varian_21iX_10X
> make && make library
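With several accelerators to set up, the build and compile steps above can be scripted. A hedged sketch, assuming the specification files carry a .module extension as in a stock install (module_name is my own helper, not part of BEAMnrc):

```shell
# Hedged sketch: build and compile every accelerator in one pass.
# Assumes spec files end in .module; module_name is a hypothetical helper.
module_name() {
    basename "$1" .module   # strip the directory and the .module extension
}

# Build each specification found in spec_modules
for spec in "$HOME"/egsnrc/beamnrc/spec_modules/*.module; do
    [ -e "$spec" ] || continue            # skip if the glob matched nothing
    beam_build.exe "$(module_name "$spec")"
done

# Compile each resulting accelerator as executable and library
for dir in "$HOME"/egsnrc/BEAM_*/; do
    [ -d "$dir" ] || continue
    ( cd "$dir" && make && make library )
done
```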

The DOSXYZnrc user code will already have been compiled during installation, but will need to be recompiled to support larger phantoms. The maximum allowable phantom size is specified in the file

/home/$USER/egsnrc/dosxyznrc/dosxyznrc_user_macros.mortran

This file should be edited for larger $IMAX, $JMAX and $KMAX values, such as:

REPLACE {$IMAX} WITH {256} "Maximum number of x cells
REPLACE {$JMAX} WITH {256} "Maximum number of y cells
REPLACE {$KMAX} WITH {256} "Maximum number of z cells
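Rather than editing the file by hand, the limits can be raised with sed. A sketch, assuming the macros use the REPLACE {$IMAX} WITH {nnn} form shown above (bump_macro is my own helper name; sed's -i.bak flag keeps a backup of the original file). The demonstration below runs on a throwaway copy rather than the real file:

```shell
# Sketch: rewrite one of the $IMAX/$JMAX/$KMAX limits in a mortran macros file.
# bump_macro is a hypothetical helper: bump_macro NAME VALUE FILE
bump_macro() {
    sed -i.bak "s/REPLACE {\\\$$1} WITH {[0-9][0-9]*}/REPLACE {\$$1} WITH {$2}/" "$3"
}

# Demonstrate on a throwaway file rather than the real macros file:
demo=$(mktemp)
printf 'REPLACE {$IMAX} WITH {128} "Maximum number of x cells\n' > "$demo"
bump_macro IMAX 256 "$demo"
cat "$demo"   # the $IMAX line now reads WITH {256}
```

Applied for real, this would be bump_macro IMAX 256 on /home/$USER/egsnrc/dosxyznrc/dosxyznrc_user_macros.mortran, repeated for JMAX and KMAX.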

Once this is done DOSXYZnrc should be recompiled using the commands:

> cd ~/egsnrc/dosxyznrc
> make

Job Submission

Before submitting a job, you must decide which queue it will be placed in. The following queues are available:

  • pbs_short – recommended if runtimes will be shorter than 10 hours.
  • pbs_medium – recommended if runtimes will be shorter than 100 hours.
  • pbs_long – to be otherwise used.
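If you script your submissions, the cutoffs above can be encoded in a small helper (pick_queue is my own name, not part of the EGSnrc distribution):

```shell
# Sketch: choose a PBS queue from an estimated runtime in whole hours,
# following the cutoffs listed above. pick_queue is a hypothetical helper.
pick_queue() {
    if [ "$1" -lt 10 ]; then
        echo pbs_short
    elif [ "$1" -lt 100 ]; then
        echo pbs_medium
    else
        echo pbs_long
    fi
}

pick_queue 8     # pbs_short
pick_queue 150   # pbs_long
```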

Jobs are submitted using the exb script distributed with EGSnrc. Usage is:

> exb user_code input_file pegs_file batch=batch_system p=N

where batch_system is the queue to be used, and N is the number of parts to split the job into (I suggest 10). To simulate the BEAM_WRO_Varian_21iX_10X accelerator, using the input file beam1.egsinp with the default materials database 700icru, in the short queue, in 10 parts, you would type:

> exb BEAM_WRO_Varian_21iX_10X beam1 700icru batch=pbs_short p=10

You can check whether the job was submitted successfully using qusers, or

> qstat -u $USER

The exb command will produce multiple egsrun_* links in the user code directory. These are links to directories on the work nodes. For example, the folder

egsrun_10677_Spinal_Plan_PHSP_f1b12p1_cl1n034

links to a directory on work node 34 of cluster 1. To access the contents of this directory (to check how many histories have been simulated, or to diagnose a problem with a simulation), you need to ssh into the node. For the above example you would type:

ssh cl1n034
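In the example above the node name is the final underscore-separated token of the egsrun directory name; assuming that pattern holds generally, it can be recovered with shell parameter expansion (run_node is my own helper name):

```shell
# Sketch: extract the work-node name from an egsrun_* directory name.
# run_node is a hypothetical helper; ${1##*_} drops everything up to
# and including the last underscore, leaving the final token.
run_node() {
    echo "${1##*_}"
}

run_node egsrun_10677_Spinal_Plan_PHSP_f1b12p1_cl1n034   # cl1n034
```

This lets you write ssh "$(run_node egsrun_10677_Spinal_Plan_PHSP_f1b12p1_cl1n034)" without picking the node name out by eye.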

Successful BEAMnrc runs result in the production of multiple phase space files. These need to be combined using the addphsp command. For the exb example above, you would type:

addphsp beam1 beam1 10

Successful DOSXYZnrc runs will automatically add the resultant pardose files together to produce a 3ddose file.