GPG Cluster Usage
The cluster is integrated with the Unix network at Glasgow SoCS, so
you can log in to nodes and launch programs freely, e.g. ssh
gpgnode-01. External users who collaborate with the GPG are
welcome to use it; see details below.
Cluster Spec
This is primarily a 20-node (320-core) distributed compute server,
i.e. not a data server. There are no reliability provisions: no
RAID array or UPS.
Hardware
1. 20 nodes
2. 16 Intel cores/node (2 * Intel Xeon E5-2640 2GHz)
3. 64 GB RAM per node, i.e. 4 GB RAM/core.
4. Local disk: 300 GB/node
5. 10 Gb Ethernet interconnect
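Once logged in, the spec above can be sanity-checked with standard Linux tools. A minimal sketch (the commands are generic; the figures in the comments are those listed above):

```shell
# Quick sanity check of a node's resources (run on the node itself).
nproc                                    # logical cores visible (16/node per the spec above)
free -g                                  # RAM in GB (64/node per the spec above)
df -h /                                  # local disk capacity
grep 'model name' /proc/cpuinfo | sort -u   # CPU model
```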
Software
The operating system is Scientific Linux 6 (similar to CentOS 6), consistent with most SoCS boxes.
Free-for-all Nodes
The free-for-all nodes are gpgnode-01, gpgnode-02, and
gpgnode-03. These are available for development and small-scale
measurements with no prior booking.
External users can simply email Phil.Trinder@glasgow.ac.uk to obtain a
Glasgow login that will give access to these nodes.
Bookable Nodes
The remaining 17 nodes, gpgnode-04 .. gpgnode-20, can be booked for
sole use as a single cluster. Why 17 nodes? Experience shows that one
node is often unavailable for some hardware, software, or user reason;
with 17 nodes we can reliably measure on up to 16 nodes (256 cores).
Access to these 17 nodes is:
* restricted to members of a Unix group
* booked using a web-based calendar
* conditional on joining a mailing list where announcements are made
Email Douglas MacFarlane and me ({Douglas.MacFarlane,Phil.Trinder}@glasgow.ac.uk) if you would like access.
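Many launchers (e.g. mpirun) accept a hostfile listing the nodes to use. A minimal sketch that generates one covering all 17 bookable nodes, assuming 16 slots per node as per the hardware spec above; trim the list to the subset you actually booked:

```shell
# Generate a hostfile for gpgnode-04 .. gpgnode-20, 16 slots each.
# seq -w pads the numbers to equal width (04, 05, ..., 20).
for i in $(seq -w 4 20); do
    echo "gpgnode-$i slots=16"
done > hostfile
wc -l < hostfile    # 17 lines, one per bookable node
```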
Booking Protocol
When you want to use the cluster, please:
* Book free slot(s) in the Google calendar
* When your slot arrives, have fun!
When booking slots, please:
* Book only as many slots as you need, and for as long as you need
* Give your name
* Indicate either All nodes, or the subset of nodes you will use
Let Phil Trinder know of any issues (address above).