- Getting started: getting an account, login, NoMachine, VNC, RedMine
- Data: data access, data checking
- Infrastructure: computing, queueing system, scratch area
- Environment: setup environment, change password, change shell, quota
- Software: available software and packages, MARS
- Data Handling: useful links, get partial file, how to read big gzipped fits files
How to communicate problems, feature requests, etc.
- Please submit any request in Redmine.
- Login to Redmine is possible with your ISDC account.
- In Redmine you can watch the requests to get feedback about their status.
Getting Started
How to get an account for Trac, SVN and data centre access...
- Accounts can be requested via Redmine
- Please include in the request: your name, institute, email address, the username you wish (max. 8 characters) and the shell you wish (default: bash). Once the account has been created, the password can be reset here.
How to login to the cluster...
- ssh -XY username@isdc-nx.isdc.unige.ch
There are 2 physical nx-login machines. Using isdc-nx redirects you to the first one, i.e. isdc-nx00.
From the isdc-nx nodes you can scp to and from the outside world. You can copy files into your own account only, while files from the archive can also be copied to the outside world via scp (see the examples below).
- From there, log in to the FACT cluster, e.g. via isdc-in00: ssh -X username@isdc-in00
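For illustration, two hedged scp examples run from your local machine (the file names and the date in the path are just placeholders):
scp myfile.txt username@isdc-nx.isdc.unige.ch:
scp username@isdc-nx.isdc.unige.ch:/fact/raw/YYYY/MM/DD/somefile.fits.gz .
The first copies a local file into your home directory at ISDC, the second fetches a raw file from the archive to your local machine.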
Remarks:
- To avoid disconnects, you may put the options KeepAlive yes and ServerAliveInterval 600 in your .ssh/config.
- To avoid typing the username every time, you may put in your .ssh/config an entry like:
Host isdc-nx
User your_username
Hostname isdc-nx.isdc.unige.ch
KeepAlive yes
ServerAliveInterval 600
ForwardX11 yes
and then do ssh isdc-nx
- If it takes very long until you are prompted for the password when connecting to isdc-nx.isdc.unige.ch, you can use ssh -o "GSSAPIAuthentication no" to avoid this delay.
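- If you often go straight to isdc-in00, you can chain the two hops in your .ssh/config. This is only a sketch, assuming your local OpenSSH is version 7.3 or newer (which supports ProxyJump):
Host isdc-in00
User your_username
Hostname isdc-in00.isdc.unige.ch
ProxyJump isdc-nx
ForwardX11 yes
and then ssh isdc-in00 logs you in via isdc-nx in one command.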
How to connect to isdc with NoMachine
A new NX server is available. You need the NoMachine client (see the installation instructions below for Mac, Linux and Windows) and the connection key (see below).
In the client, enter a name for the session and the hostname of the server, and import the key you have downloaded.
Key
The key can be retrieved from /gpfs/fact/homekeeping/Client.id_dsa.key.txt
Installation
- Download and install the [http://www.nomachine.com/download-client-linux.php NoMachine client for Linux].
- Run /usr/NX/bin/nxclient -wizard to start the connection wizard and follow the Mac OS X instructions from step 3. You will also find links to "NX Client for Linux" in your Start menu.
- To connect again run /usr/NX/bin/nxclient without any arguments.
How to connect to isdc-in00 with vnc...
- Start vncserver on isdc-in00: vncserver -geometry 1440x900
When you do it for the first time, you are asked to enter a password. You will need this password when accessing the desktop via a vnc viewer.
A vnc server is started. Remember :XX the number of your desktop.
Remark: You need this first step only if you do not yet have a vncserver running on isdc-in00.
- execute in a terminal on your local machine:
ssh -L 59XX:localhost:59XX -c blowfish -C username@isdc-nx.isdc.unige.ch
With this you are logged in on the nx machine.
- once logged on, type the following command:
ssh -L 59XX:localhost:59XX -c blowfish -C username@isdc-in00.isdc.unige.ch
- To connect (from your local machine):
- NOTE: isdc-in00 has its ports open, so to VNC to this machine only the first tunnel is required (but the viewer command should connect to isdc-in00:XX instead of just :XX). On the viewer nodes, both tunnels are still required.
- From Linux: vinagre :XX or vncviewer :XX
- From OSX: "Chicken for VNC"
Remarks:
- The vncserver needs to be started only once in the beginning or when the machine on which it is running had been down.
- By default, gnome is started. If you want xfce4 as window manager (on isdc-in00), put e.g. startxfce4 & in your .vnc/xstartup before starting the vncserver.
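- To manage a session you started, the standard vncserver/vncpasswd commands should work (a hedged sketch, :XX being the display number reported at startup):
vncpasswd              (change the VNC password you set at the first start)
vncserver -kill :XX    (stop the VNC server running on display :XX)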
RedMine
ISDC handles issues with RedMine
In order to use the system, you MUST register under RedMine, even if you already have an ISDC account.
Then send an email to the system administrators (isdc-system-mgt@unige.ch) so that they add you to the FACT project.
Data
Paths to the FACT data:
| general path        | /fact                |
| compressed raw data | /fact/raw/YYYY/MM/DD |
| slow control data   | /fact/aux/YYYY/MM/DD |
Remarks:
- In La Palma, data are copied from the daq machine to the data machine, and afterwards compressed and copied to ISDC.
- You can copy data from ISDC to the outside using scp. This can only be done in one step from isdc-nx (which is directly connected to the internet).
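To browse the data of a given night, you can simply list the corresponding directory from the table above (the date below is only an example):
ls /fact/raw/2012/03/25/
ls /fact/aux/2012/03/25/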
Data Checking
For data checking, the following webpages provide information:
Infrastructure
| machine name      | purpose                                            | login possible | job submission possible |
| isdc-in00/01      | viewing data                                       | yes            | yes                      |
| isdc-fact-build00 | virtual machine dedicated to compilation           | yes            | no                       |
| isdc-cnXX         | new queue processing nodes (XX currently 09 to 18) | no             | no                       |
| isdc-in00         | new queue headnode                                 | yes            | yes                      |
| isdc-nx           | login and data transfer node                       | yes            | no                       |
How to run processes on the cluster
- The queueing system Sun Grid Engine is available.
- Job submission is possible from isdc-in00.
- Usual queues are available (i.e. fact_short, fact_medium and fact_long).
- Each node of the new cluster has 16 cores and 64 GB of RAM. A maximum of 16 jobs per node is currently allowed. You cannot log onto the compute nodes.
- fact_short uses all available slots, i.e. 16 slots on 12 machines, total 192 slots. Max job time per slot: 1h
- fact_medium uses 8 nodes out of 12 available, total 128 slots. Max job time per slot: 6h
- fact_long uses 4 nodes out of 12 available, total 64 slots. Max job time per slot: 168h (1 week)
- The important commands are qsub, qstat and qdel.
- To check which jobs are in the queue (also those of other users): qstat -u '*'
- To check which queues are available, you can use the command qstat -g c
- To submit a job: qsub -q queuename
- For qsub the following options might be useful:
- -e error_output_file (else the errors are stored in jobname.ejobnumber)
- -o output_file (else the output is stored in jobname.ojobnumber)
- -b yes (if the script or program you submit is not a sun grid engine script)
- -N jobname (in case you want to specify a different jobname than the default)
- To delete jobs from the queue: qdel jobnumber or, for all your jobs: qdel -u your_username
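To put the qsub options together, here is a minimal job script sketch (the job name, the program and the output paths under the scratch area are just placeholders):
#!/bin/bash
#$ -N my_analysis
#$ -q fact_short
#$ -o /gpfs/scratch/fact/your_username/my_analysis.out
#$ -e /gpfs/scratch/fact/your_username/my_analysis.err
# the actual work:
./my_program
Submit it from isdc-in00 with qsub my_job.sge. For a plain executable without embedded #$ directives, use e.g. qsub -q fact_short -b yes -N my_analysis ./my_program instead.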
The data itself is located where it used to be, i.e. in /fact.
The software installation is the same as on the old cluster, and the custom folder /swdev_nfs is mounted there too.
REMINDER:
There is NO local scratch any longer. /scratch points to the shared scratch area, while /scratch_nfs is gone. Remember to set the output filenames properly.
The shared scratch is limited to 15TB of space. Free space can be monitored from here.
The database in La Palma can only be accessed through lp-fact.isdc.unige.ch, which forwards the network packets directly to La Palma. Change your settings accordingly (see the example below).
Sequence files are now located in /gpfs/fact/sequences; newer ones must be (for now) copied there by hand.
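As a hedged example of such a setting, a command-line MySQL client would point at the forwarding host (user name is a placeholder, to be replaced by your actual credentials):
mysql -h lp-fact.isdc.unige.ch -u your_db_user -p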
Link to the ISDC page for this cluster: here
Monitoring of the cluster activity can be done from this webpage
Scratch area
The scratch area can be found at /gpfs/scratch/fact. Please note that this area (/gpfs0/scratch) is limited to 15TB of data. Free space can be checked by typing "df -h /gpfs0/scratch" on any gateway node or on the cluster itself.
System load can be monitored from
here
Environment
How to configure your environment variables
- add the following lines to your .tcshrc:
setenv HEADAS /swdev_nfs/heasoft-6.11.1/x86_64-unknown-linux-gnu-libc2.12
source $HEADAS/headas-init.csh
setenv ROOTSYS /swdev_nfs/root_v5.32.00
setenv PATH $ROOTSYS/bin:$PATH
setenv LD_LIBRARY_PATH $ROOTSYS/lib:/swdev_nfs/FACT++/.libs:$LD_LIBRARY_PATH
setenv PATH ${PATH}:/swdev_nfs/FACT++
- or .bashrc:
export HEADAS=/swdev_nfs/heasoft-6.11.1/x86_64-unknown-linux-gnu-libc2.12
source $HEADAS/headas-init.sh
export ROOTSYS=/swdev_nfs/root_v5.32.00
export PATH=$ROOTSYS/bin:$PATH
export LD_LIBRARY_PATH=$ROOTSYS/lib:/swdev_nfs/FACT++/.libs:$LD_LIBRARY_PATH
export PATH=$PATH:/swdev_nfs/FACT++
- Remarks:
for .tcshrc: if LD_LIBRARY_PATH was not set before, use: setenv LD_LIBRARY_PATH $ROOTSYS/lib:/swdev_nfs/FACT++/.libs
to use PyROOT, you need /swdev_nfs/root_v5.28.00
to use MySQL from ROOT or Mars, you need /swdev_nfs/root_v5.32.00
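After opening a new shell, a quick sanity check of the setup could look like this (a sketch, assuming bash and the paths above):
echo $HEADAS
root-config --version
which root
echo $HEADAS should print the heasoft path, root-config --version should report 5.32/00, and which root should resolve to $ROOTSYS/bin/root.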
How to change the password
Simply go here. Enter your login and email address and you will receive a link via email to a webpage where your password can be changed.
How to change the default shell
Default shells can only be modified by the admins, so you should issue a ticket specifying your username and the desired shell.
Quota
On the new cluster there is no quota either, but remember that your account's space is shared with the archive. So the more space you use, the less data can be ingested into the archive.
Software
Software available on isdc-in00
- Editors: nedit, vi, nano, emacs, xemacs, efte
- Viewer: evince (ps,pdf); gthumb (images)
- Development: gdb, valgrind
Where to find additional software...
- /swdev_nfs/root_v5.26.00 --> Precompiled version
- /swdev_nfs/root_v5.28.00 --> Compiled from sources. To be used with python.
- /swdev_nfs/root_v5.32.00 --> Compiled from sources. MySQL enabled.
- /swdev_nfs/heasoft-6.11.1 --> FTOOLS
- /swdev_nfs/FACT++/viewer --> FACT++ raw events viewer
- /swdev_nfs/FACT++/fitsdump --> FACT++ raw data dumper
- /swdev_nfs/topcat/topcat --> Java fits files viewer
- /swdev_nfs/Mars --> Modular Analysis and Reconstruction Software
Remarks: /swdev_nfs is mounted via NFS on all machines. More information on the installed software can be found in /swdev_nfs/README.txt
How to get MARS...
svn co https://www.fact-project.org/svn/trunk/Mars
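A rough sketch of getting and building it, assuming Mars uses its standard ROOT-based Makefile (check the README in the checkout; this is not guaranteed to be the exact procedure):
svn co https://www.fact-project.org/svn/trunk/Mars
cd Mars
make
The ROOT environment from the Environment section above needs to be set up before running make.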
Useful Links
Data Handling
How to get part of a file...
gzip -d -c rawfile.fits.gz | ftcopy -'[#row>10&row<100]' selected_data.fits
Further help on ftools and filtering fits files can be found here.
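Before cutting out rows it can help to inspect the file first. Assuming the HEASOFT tools from the Environment section are set up, ftlist can do this (the file name is a placeholder):
ftlist rawfile.fits.gz H
ftlist 'rawfile.fits.gz[1]' C
The first command lists the HDUs in the file, the second the columns of the first extension.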
How to access data from big gzipped fits files...
Several ways of doing so:
- Use the raw events viewer (/swdev_nfs/FACT++/viewer) to view individual events. This program is capable of reading these files out of the box.
- Use fitsdump (the one from FACT: /swdev_nfs/FACT++/fitsdump). This program can dump column values or display headers of big, gzipped files without the need to gunzip them first (new feature!).
- Use the 'fits.h' class from Thomas Bretz in your C++ code. This class is part of the Mars software (see the MARS section above).
- Use the 'fits.h' in ROOT, as follows:
root [0] gSystem->Load("/usr/lib64/libz.so");
root [1] .L fits.h++
- Use the 'fits.h' in python, as follows:
$rootcint -f my_dict.C -c fits.h izstream.h
$g++ -fPIC -c -I$ROOTSYS/include my_dict.C -o my_dict.o
$g++ -o fitslib.so -shared my_dict.o
$python
Python 2.6.6 (r266:84292, May 20 2011, 16:42:11)
[GCC 4.4.5 20110214 (Red Hat 4.4.5-6)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from ctypes import *
>>> from ROOT import gSystem
>>> gSystem.Load('fitslib.so')