
H64ACE HowTo-Guide

The H64ACE Project Cluster

The University of Nottingham
George Green Institute
for Electromagnetics Research

http://www.nottingham.ac.uk/ggiemr


APPLIED COMPUTATIONAL ENGINEERING – H64ACE

1. Introduction

This guide provides an introduction to the department’s cluster used in the Applied Computational
Engineering module, H64ACE. This is not intended to be a comprehensive document on the
functionality of the cluster and the software installed on it, but it should provide sufficient
information for you to access the cluster and to compile and run serial and parallel jobs.

If you have any suggestions regarding the contents of this guide, please contact the author at steve.greedy@nottingham.ac.uk.


2. Cluster Overview

The H64ACE cluster consists of a 12U rack housing the head node and 6 compute nodes connected
via gigabit ethernet. Users login to the head node and all computation takes place on the compute
nodes.

2.1 Hardware Overview

The following figure illustrates the physical layout of the cluster in the server rack.

Figure 2.1. Schematic illustration of the cluster.

The frontend, or head node, is the machine the user logs into to compile and submit jobs to be
executed on the compute nodes. The head node’s specification is:

 2U Supermicro Server platform.

 CPU – Dual Harpertown E5405 2GHz quad core.

 Memory – 8GB DDR2-667.

 Storage – 500GB mirrored array.

The compute nodes that carry out the computation are comp00 to comp05 in figure 2.1 and their specifications are:

 CPU – Dual Nehalem E5520 2.26GHz quad core.

 Memory – 12GB DDR3-1333.

 Storage – 250GB SATA.


The head node and compute nodes are linked via gigabit ethernet and an HP ProCurve 1400-25 switch.

2.2 Software Overview

The following provides a brief overview of the software installed on the cluster.

2.2.1 Operating System

The cluster runs the Linux operating system, currently Scientific Linux 5.3 [1], which is installed on both the head node and all the compute nodes.

2.2.2 Compilers

For serial computation the free GNU compilers are installed: C (gcc, version 4.1.2), C++ (g++, version 4.1.2), Fortran 77 (g77, version 3.4.6) and Fortran 95 (f95, version 4.1.2).

For parallel computation within the scope of this module the cluster supports both OpenMP and MPI, the latter being the MPICH2 [2] implementation.

NOTE: Although the head node and compute nodes have a reasonable amount of storage, large
amounts of user data should not be stored on the cluster. Each user should also ensure that
backups are made of any data stored on the cluster. It is the responsibility of each user to ensure
that their work is backed up.


3. Getting Started

You will need an account before you can log in to the cluster. Accounts will have been set up for all students registered on the H64ACE module; usernames and passwords can be obtained from the cluster’s system administrator. Access to the cluster is via ssh (secure shell), which provides a secure channel of communication between you and the remote cluster. Access via secure shell is covered in the following section.

Once logged in you will be provided with a user interface in the form of a command line interpreter, or shell. The commands supported are shell dependent; the default shell installed on the cluster is bash. See appendix A for a list of common commands.

3.1 Logging in to the Cluster from a Windows Machine

Access to the cluster is via secure shell, so you will need an SSH client such as PuTTY [3]. PuTTY is recommended and its use from within Windows will be assumed in the following. Download putty.exe and save it to your desktop. The program can be found at:

http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html

Double click the executable (you may be asked if you want to run this program as the publisher could not be verified; click yes) and you will be presented with the PuTTY configuration window, figure 3.1.

Figure 3.1. PuTTY Configuration window.

In the Host Name field type ‘jenna.eee.nottingham.ac.uk’. In the saved sessions field type ‘jenna’ and click save. jenna will now appear in the saved sessions list. A connection to the project server can now be established by double clicking on the session name, or by selecting the session name, clicking load and then open. Either will bring up a terminal window or shell.



The first time you try to connect to a remote machine you’ll be presented with a security alert; click yes to proceed and enter your username and password, which should bring you to the command prompt of the Linux environment, figure 3.2.

Figure 3.2. Terminal window and the bash shell.

You will now be in the root of your home directory and you can navigate folders, compile and run
programs by issuing the necessary commands. The compilation and execution of programs is
covered in section 4.

The cluster is intended only for the compilation and execution of codes. It is therefore expected that
codes would have been developed and tested on a local machine prior to compilation and execution
on the cluster. In order to compile codes they need to be transferred from your local machine to the
cluster and subsequently any resulting output requires transferring from the cluster to your local
machine. The next section covers file transfer to and from the cluster.

3.2 Logging in to the Cluster from a Linux Machine

Open up a terminal; in Ubuntu this can be found in ‘Accessories’ under the ‘Applications’ menu. At the command prompt type:

>ssh yourUserName@jenna.eee.nottingham.ac.uk

You’ll then be prompted for your password.


3.3 File Transfer

File transfer is also performed over a secure channel of communication and therefore a client supporting the secure file transfer protocol, SFTP, is required. FileZilla [4], which is available for a number of platforms, is recommended. The following covers the installation, set-up and basic use of FileZilla on a Windows based PC.

Download the latest FileZilla client for Windows from:

http://filezilla-project.org

Browse to the file you have downloaded and run the setup to install FileZilla. Choose to run the program on completion of the installation, or locate the shortcut in your start menu and run it from there. You’ll then be presented with the FileZilla GUI, figure 3.3.

Figure 3.3 FileZilla GUI.


The 3 main areas of interest are: (1) the server details, (2) the local machine file browser and (3) the remote machine file browser. The latter is empty as a remote machine hasn’t yet been specified.

To connect to the cluster enter the address jenna.eee.nottingham.ac.uk in the address box, your username and password in the next two boxes, then enter 22 as the port and click quick connect. The remote site file browser should now list the files in your home directory (if there are any), figure 3.4.

Figure 3.4 FileZilla Connected to the Remote Cluster.

The upper half of the file browser displays the directory structure, or tree, on the local (2) and remote (3) machines; the lower half displays the contents of any folder selected. To transfer files to the remote machine, select the destination folder on the remote machine by clicking on it in the remote directory tree, locate the file on the local machine and double click the file. The file will then be automatically transferred. Likewise, to transfer from the remote to the local machine, select the destination folder in the local directory tree, locate the file on the remote machine and double click it. Again it will automatically be transferred. You can also drag and drop files from one side to the other as you would in Windows Explorer.


Right clicking in the lower half of the file browser brings up a context menu that allows you to create
directories, amongst other things, without having to login to the cluster. It is good practice, from an
organisational point of view, to create a new folder for every project or program you create.

3.4 Using an X Window System

As mentioned in the previous section, any significant code development should have been carried out on a local machine before being transferred to the cluster for compilation and execution. However, for relatively small applications and simple debugging it is useful to be able to make use of applications (X clients) that run on the cluster but whose X windows GUI is forwarded to your local machine. For the purposes of this module it provides a relatively simple way of editing files directly on the cluster and saving them without the need for further file transfer.

3.4.1 From a Windows Machine

Windows has no native support for X windows; however, third party solutions do exist. The one considered here is Xming [5]. The following will assume that the installation of PuTTY as described in section 3.1 has been completed.

Download the Xming installer from:

http://sourceforge.net/projects/xming/files/Xming/

The latest public domain version is 6.9.0.31. Once downloaded, run the installer and accept all the default options. Once the installer completes, click finish to launch Xming and an icon should appear in the system tray. This shows that the X server is running. If you are asked to allow or unblock the service by your firewall software, please do so.

To establish a connection to the cluster locate the Xming folder in your start menu, click to expand it and select XLaunch. Leave multiple windows selected and change the display number to 1, figure 3.5. Click next. On the following page select start a program, figure 3.6. Click next.

On the next page, figure 3.7, select Run Remote, Using PuTTY (plink.exe). Enter the cluster remote address and enter your username and password.

Click next, then next on the following window. Click save configuration (you may choose not to save the password) and you will be prompted for an xlaunch configuration file name and location. Enter jenna.xlaunch as the file name and choose your desktop as the location. Click save and then finish. A terminal window should then appear.

To test all is ok, type ‘xeyes &’ at the command prompt and press return, then ‘xclock &’ and then ‘gedit &’. Your desktop should now be similar to figure 3.8.


Figure 3.5 Display settings.

Figure 3.6 Session type.


Figure 3.7 Start program (PuTTY) configuration.

Figure 3.8 Xming running on a Windows desktop, running X applications (xeyes, xclock and gedit).


You can now close individual applications by clicking the close window button, or close all at once by ending your session, i.e. by closing the terminal window.

To relaunch your X session, double click the ‘jenna.xlaunch’ icon. gedit can be used to edit your program files and save them directly into your user space on the cluster. Launching gedit with the ‘&’ option launches it in the background, allowing you to type other commands in the terminal, e.g. to compile the file you have just edited and saved, whilst keeping gedit open.

3.4.2 From a Linux Machine

Open up a terminal and log in to the cluster as described in section 3.2, but add the -X option to redirect the remote machine’s display to your local machine:

>ssh -X yourUserName@jenna.eee.nottingham.ac.uk

This will then open a terminal connected to the cluster and any X application GUIs will be displayed on your desktop.


4. Compiling Your Application

Compilation and execution of applications is covered in the following sections. Throughout the following, the ‘>’ denotes the command prompt, which may or may not be the same on your terminal; you do not need to type this character.

4.1 Serial Applications

To compile an application written in C called ‘myprog_serial.c’, change to the folder the source code is in and issue the following command

>gcc -o myprog_executable myprog_serial.c

where ‘myprog_executable’ is the name of the executable to be generated. To compile an
application written in C++ called myprog_serial.cpp

>g++ -o myprog_executable myprog_serial.cpp

Each will create an executable file ‘myprog_executable’ that can be run from the command line by
typing

>./myprog_executable

The ‘./’ tells the shell that the executable is in the current folder.

4.2 OpenMP Applications

To compile an OpenMP application the command line instructions are as above but with the addition of the option ‘-fopenmp’. So for an application written in C

>gcc -o myprog_executable myprog_serial.c -fopenmp

or to compile an application written in C++ called myprog_serial.cpp

>g++ -o myprog_executable myprog_serial.cpp -fopenmp

The executable can then be run in the same manner, by typing its name at the command line preceded with ‘./’.

4.3 MPI Programs

There are two key elements to running an MPI application. First you need to specify the machines your application will run on; this is done by creating a hosts file that lists the machines your application will use. The root folder in your home directory contains a hosts file called mpd.hosts and you should refer to this file when running your programs. For simplicity you can copy this file, using the ‘cp’ command, to the same folder as your source code. Initially the contents of the mpd.hosts file will be similar to:

comp00:08

comp01:08

This specifies that nodes comp00 and comp01 will be available to your application and that each node can support 8 processes, the number of cores in each compute node. Your MPI application could therefore make use of up to 16 processing cores.

The cluster has a total of 48 cores that you will eventually make use of, but for now use the mpd.hosts file in your root folder. This will help manage the load on the cluster, as each of you will initially access a different group of nodes.

The next step is to load the MPICH2 environment and initialise each node to allow your MPI application to run; this step needs to be done only once at each login. To load the MPICH2 environment:

>module load mpich2

To initialise each node to allow MPI applications to run on them

>mpdboot -n 3 -f mpd.hosts

where ‘-n’ specifies the number of compute nodes to be used plus the head node and ‘-f’ specifies the hosts file.

You can now compile and execute your code. To compile the code, navigate to the folder where your source code is and, for the C programming language

>mpicc -o myprog_executable myprog_mpi.c

or for the C++ programming language:

>mpicxx -o myprog_executable myprog_mpi.cpp

The executable can then be run by issuing the following command

>mpirun -machinefile mpd.hosts -np X ./myprog_executable

where ‘X’ is the number of processes to start.

You should now be in a position to develop your codes, compile and run them on the H64ACE cluster!


Appendix A

Summary of common bash shell commands.

cd Change Directory

clear Clear terminal screen

cmp Compare two files

cp Copy one or more files to another location

date Display or change the date & time

diff Display the differences between two files

dir Briefly list directory contents

du Estimate file space usage

exit Exit the shell

find Search for files that meet a desired criteria

free Display memory usage

ftp File Transfer Protocol

grep Search file(s) for lines that match a given pattern

gzip Compress or decompress named file(s)

history Command History

hostname Print or set system name

kill Stop a process from running

logname Print current login name

logout Exit a login shell

ls List information about file(s)

man Help manual

mkdir Create new folder(s)

more Display output one screen at a time

mv Move or rename files or directories

nohup Run a command immune to hangups

passwd Modify a user password

ps Process status

pwd Print Working Directory

renice Alter priority of running processes

rm Remove files

rmdir Remove folder(s)

sftp Secure File Transfer Program

ssh Secure Shell client (remote login program)

time Measure program running time

times User and system times

top List processes running on the system

vi Text Editor

which Locate a program file in the user’s path.

who Print all usernames currently logged in

whoami Print the current user id and name (‘id -un’)

write Send a message to another user


Some examples of their usage and resulting output, if any, follow. Most commands can be further tailored with command line arguments; to see the full functionality of any command, type man followed by the command, e.g. man ls.

cd change the working directory

$ cd /bin

cp copy a file

$ cp file.txt copy_of_file.txt

exit exit the shell

$ exit

ls list directory contents

$ ls

file.txt copy_of_file.txt

mkdir make a directory

$ mkdir newdir

mv move a file

$ mv here.txt there.txt

pwd output the name of the current directory

$ pwd

/home/userx

rm remove a file

$ rm copy_of_file.txt

rmdir remove a directory

$ rmdir newdir


References

1. http://en.wikipedia.org/wiki/Scientific_Linux

2. http://www.mcs.anl.gov/research/projects/mpich2/

3. http://www.chiark.greenend.org.uk/~sgtatham/putty/

4. http://filezilla-project.org/

5. http://www.straightrunning.com/XmingNotes/
